
riscv: Add basic riscv bindings #106

Merged 4 commits into rust-vmm:main on Sep 9, 2024

Conversation

@TimePrinciple (Contributor) commented May 13, 2024

Updated arm64, x86_64, and added riscv kvm bindings according to v6.9 kernel source.

Summary of the PR

Generates riscv bindings, and regenerates the arm64 and x86_64 bindings from v6.9. I've noticed #92, but it has not seen changes for a while, so I decided to push this work. I'm currently working on adding riscv support to kvm-ioctls.

I also wonder how to add cross-compilation test CI in rust-vmm-ci. Maybe configure an x86_64 Ubuntu runner and set the `runner` field to `/usr/bin/qemu-riscv64`, or something similar?
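A minimal sketch of what that could look like, assuming Cargo's standard target-runner mechanism and the `riscv64gc-unknown-linux-gnu` target (package names and paths are illustrative Ubuntu defaults, not necessarily what rust-vmm-ci would end up using):

```sh
# Hypothetical sketch: cross-compile on an x86_64 Ubuntu host and run the
# test binaries under QEMU user-mode emulation via Cargo's target runner.
rustup target add riscv64gc-unknown-linux-gnu
sudo apt-get install -y qemu-user gcc-riscv64-linux-gnu

# Point Cargo at a cross-linker and a QEMU runner for this target.
mkdir -p .cargo
cat >> .cargo/config.toml <<'EOF'
[target.riscv64gc-unknown-linux-gnu]
linker = "riscv64-linux-gnu-gcc"
runner = "/usr/bin/qemu-riscv64 -L /usr/riscv64-linux-gnu"
EOF

# `cargo test` now builds for RISC-V and executes each test binary via QEMU.
cargo test --target riscv64gc-unknown-linux-gnu
```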

Requirements

Before submitting your PR, please make sure you addressed the following
requirements:

  • All commits in this PR have Signed-Off-By trailers (with
    git commit -s), and the commit message has max 60 characters for the
    summary and max 75 characters for each description line.
  • All added/changed functionality has a corresponding unit/integration
    test.
  • All added/changed public-facing functionality has entries in the "Upcoming
    Release" section of CHANGELOG.md (if no such section exists, please create one).
  • Any newly added unsafe code is properly documented.

@TimePrinciple force-pushed the add-riscv-architecture branch 2 times, most recently from 24332c0 to d365bbb on May 14, 2024 00:38
@JonathanWoollett-Light

We want to integrate testing for RISC-V before merging RISC-V-specific code, ideally resolving rust-vmm/rust-vmm-container#96.

@TimePrinciple (Contributor, Author)

> We want to integrate testing for RISC-V before merging RISC-V-specific code, ideally resolving rust-vmm/rust-vmm-container#96.

I've proposed using an x86_64 machine to cross-compile and qemu-riscv64-static to test the compiled binaries in rust-vmm-ci and rust-vmm-container; please have a look at your convenience.
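If helpful, the flow I have in mind is roughly this (a sketch only; it assumes qemu-user-static is installed, and the test-binary name and hash are illustrative):

```sh
# Hypothetical sketch: build the test binaries on x86_64, then execute them
# under qemu-riscv64-static (a statically linked QEMU that works inside
# minimal containers). -L points QEMU at the RISC-V sysroot so the guest
# binary's dynamic loader can be found.
cargo build --tests --target riscv64gc-unknown-linux-gnu
qemu-riscv64-static -L /usr/riscv64-linux-gnu \
    target/riscv64gc-unknown-linux-gnu/debug/deps/kvm_bindings-<hash>
```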

@rbradford (Contributor)

@TimePrinciple Can you please split the arm64 & x86-64 bindings regenerations into their own separate patches and put them into a new PR, as I think it may be some time until the required RISC-V CI enablement is available.

@TimePrinciple (Contributor, Author)

> @TimePrinciple Can you please split the arm64 & x86-64 bindings regenerations into their own separate patches and put them into a new PR...

Sure thing. But should I do it against the latest kernel version, or just stick to v6.9?

@rbradford (Contributor)

> Sure thing. But should I do it against the latest kernel version, or just stick to v6.9?

v6.9 only just came out - it will be a while before another, so it seems fine to stick with that.

@TimePrinciple (Contributor, Author)

> v6.9 only just came out - it will be a while before another, so it seems fine to stick with that.

I mean: if I'm going to split the x86_64 and arm bindings into a new PR, what kernel version would be suitable for the regeneration?

@rbradford (Contributor)

> I mean: if I'm going to split the x86_64 and arm bindings into a new PR, what kernel version would be suitable for the regeneration?

I would stick with v6.9 - just split your existing work into different commits - no need to change the content. :-)

@TimePrinciple (Contributor, Author) commented Sep 5, 2024

It seems that net.ipv4.ip_forward is not enabled on the runners, which causes the network problem inside QEMU.

I currently have no clue what's going on inside the runners; I tried those CI commands locally and everything works fine.

I propose vendoring all dependencies outside QEMU, which could address this problem, but a lot of modifications would be needed.
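A rough sketch of the vendoring idea, assuming plain `cargo vendor` (the actual wiring into the CI scripts is the part that would need the modifications):

```sh
# Hypothetical sketch: fetch all dependencies on the host (outside QEMU),
# then build fully offline inside the guest.
cargo vendor vendor/

# `cargo vendor` prints a config snippet like the one below; append it so
# Cargo resolves crates.io packages from the local vendor/ directory.
cat >> .cargo/config.toml <<'EOF'
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
EOF

# Inside the QEMU guest, no network access is required anymore.
cargo build --offline
```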

@stefano-garzarella (Member)

> It seems that net.ipv4.ip_forward is not enabled on the runners, which causes the network problem inside QEMU.
> ...
> I propose vendoring all dependencies outside QEMU, which could address this problem, but a lot of modifications would be needed.

In the container we do `echo 'nameserver 8.8.8.8' > $ROOTFS_DIR/etc/resolv.conf`. I'm not sure that IP is accessible, so maybe we should use the host as nameserver. According to https://wiki.qemu.org/Documentation/Networking, the default virtual DNS server is 10.0.2.3, or you can specify the guest-visible virtual DNS server address with `-netdev user,id=n0,dns=addr`.
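Concretely, that could look like this (a sketch only; the device and remaining machine options depend on the actual CI invocation):

```sh
# Hypothetical sketch: with user-mode networking, QEMU's built-in DNS
# forwarder lives at 10.0.2.3 and proxies to the host's resolver, so the
# guest's resolv.conf can point there instead of 8.8.8.8.
echo 'nameserver 10.0.2.3' > $ROOTFS_DIR/etc/resolv.conf

qemu-system-riscv64 \
    -netdev user,id=n0 \
    -device virtio-net-device,netdev=n0 \
    ...   # machine, kernel, and rootfs options elided; adding dns=addr to
          # the -netdev line would relocate the virtual DNS server if needed
```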

@stefano-garzarella (Member)

BTW, I'd expect the DHCP implemented by the user netdev to also set the DNS, so why do we need to set /etc/resolv.conf at all?

@roypat (Contributor) commented Sep 6, 2024

GitHub's PR linking strikes again.

@roypat reopened this on Sep 6, 2024
Bumps [rust-vmm-ci](https://github.com/rust-vmm/rust-vmm-ci) from `9f641f2` to `55ee075`, which has RISC-V CI ready to be enabled.
- [Commits](rust-vmm/rust-vmm-ci@9f641f2...55ee075)

Signed-off-by: Ruoqing He <[email protected]>
@TimePrinciple (Contributor, Author)

I would like to know how we fix this.

@roypat (Contributor) commented Sep 6, 2024

> I would like to know how we fix this.

You can just adjust the value in the JSON file; the toolchain update caused this in a few repositories for some reason.
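For context, this refers to rust-vmm-ci's per-architecture coverage config; a sketch of the adjustment (the field names here are my recollection of that file's schema, so treat them as assumptions):

```sh
# Hypothetical: bump the expected score to what test_coverage.py reports.
cat coverage_config_x86_64.json
# {
#   "coverage_score": 79.17,
#   "exclude_path": "",
#   "crate_features": ""
# }
```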

@TimePrinciple (Contributor, Author)

It's ready now; please take a look when convenient @stefano-garzarella @roypat @rbradford, many thanks 😊

@rbradford previously approved these changes on Sep 6, 2024

src/lib.rs: review thread (outdated, resolved)
Introduce RISC-V KVM bindings for Linux kernel v6.9, and enable RISC-V
CI in `.platform` file.

Signed-off-by: Ruoqing He <[email protected]>

Replace `aarch` with `arm` to meet the expected architecture.

Signed-off-by: Ruoqing He <[email protected]>

Set code coverage to 79.17%, as `test_coverage.py` reported.

Signed-off-by: Ruoqing He <[email protected]>
@roypat merged commit f82eed5 into rust-vmm:main on Sep 9, 2024 (2 checks passed)
@TimePrinciple deleted the add-riscv-architecture branch on September 10, 2024