big-endian support working on simulation only #20

Open · samsoniuk opened this issue Aug 19, 2019 · 5 comments

@samsoniuk
Member

The big-endian support works only in simulation. The problem is probably related to gcc: even in simulation, the -Os optimized output does not appear to match the equivalent output of the little-endian version of gcc.

@samsoniuk
Member Author

Fixed! The problem was BITS_BIG_ENDIAN set to 1 in gcc. Comparing the code of putchar(), for example, there is a test variable&1 where the little-endian compiler generates an "addi" but the big-endian compiler generates a "slli". With BITS_BIG_ENDIAN set to 0, the generated code makes sense again and works both on the FPGA and in simulation. This fix confirms that the DarkRISCV is accidentally "bi-endian", i.e. the design happens to let both the hardware and the software work in either little- or big-endian mode.

The affected file in the gcc:

```
./gcc/gcc/config/riscv/riscv.h
89,91c89,91
< #define BITS_BIG_ENDIAN 0
```
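For illustration, a minimal C sketch of the kind of bit test that exposed the mismatch (the names and the register address are hypothetical, not taken from the DarkRISCV sources):

```c
#include <stdint.h>

/* Hypothetical memory-mapped status register (address made up for this sketch). */
#define UART_STATUS (*(volatile uint32_t *)0x80000004)

int uart_busy(void)
{
    /* A test of bit 0, like the one in putchar(). The little-endian gcc emitted
     * the expected "addi"-style sequence here, while the big-endian gcc built
     * with BITS_BIG_ENDIAN=1 emitted a "slli"-based sequence that did not behave
     * the same way; with BITS_BIG_ENDIAN=0 both builds agree. */
    return UART_STATUS & 1;
}
```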

@samsoniuk samsoniuk reopened this Aug 20, 2019
@samsoniuk
Member Author

Of course, the problem is never that easy to solve...

@samsoniuk samsoniuk self-assigned this Aug 20, 2019
@samsoniuk
Member Author

After some effort to make gcc generate big-endian output for RISC-V, I got a mixed result: the .data* segments are fully generated as big-endian (some extra changes in binutils are needed to make it work), but the .text* segment is not fully generated. Anyway, after some extra research I found that it is possible to implement a smarter way to handle both big- and little-endian memories in the same core. This means it is possible to put the compiled .data and .text in little-endian memory areas and put network frames in big-endian memory areas, in a way that the endian handling between those areas can be optimized (optimized means optimized, not transparent), as sketched below.
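As a rough illustration of the kind of explicit (non-transparent) endian handling meant above, here is a minimal C sketch with hypothetical names: the core runs little-endian .text/.data, and fields read from a frame buffer placed in a big-endian memory area are byte-swapped at the access point.

```c
#include <stdint.h>

/* Hypothetical big-endian memory area holding a received network frame. */
extern volatile uint8_t frame_buf[];

/* Explicit, optimizable conversion: read a 16/32-bit big-endian field from
 * the frame area into the core's native little-endian representation. */
static inline uint16_t be16_read(const volatile uint8_t *p)
{
    return (uint16_t)((p[0] << 8) | p[1]);
}

static inline uint32_t be32_read(const volatile uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Example: extract the EtherType field (bytes 12..13 of an Ethernet header). */
uint16_t frame_ethertype(void)
{
    return be16_read(&frame_buf[12]);
}
```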

@zeldin

zeldin commented Jun 29, 2021

Hi. Just a quick heads-up: GCC 11 (and binutils 2.36) fully support the -mbig-endian command line option to generate code for big-endian RISC-V.
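For anyone trying that toolchain, a small compile-time check using GCC's standard predefined byte-order macros (nothing DarkRISCV-specific) can confirm that a translation unit is really being built big-endian when -mbig-endian is passed:

```c
/* GCC predefines __BYTE_ORDER__, __ORDER_BIG_ENDIAN__ and __ORDER_LITTLE_ENDIAN__;
 * with a RISC-V gcc invoked with -mbig-endian this check should pass,
 * otherwise the build fails loudly. */
#if __BYTE_ORDER__ != __ORDER_BIG_ENDIAN__
#error "expected a big-endian build: pass -mbig-endian (GCC 11+, binutils 2.36+)"
#endif
```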

@samsoniuk
Member Author

Wow! This is very good news! I will take a look as soon as possible! :)

@samsoniuk samsoniuk reopened this Jun 30, 2021