
Feature request: pre-checker script #39

Open
alexandrujuncu opened this issue Mar 11, 2013 · 6 comments

@alexandrujuncu

In the current architecture, each assignment is associated with a virtual machine and a checker. There are situations where this model doesn't match the needs of the assignment.

For example, we want a mechanism where a submitted homework runs on one virtual machine and then runs again on a different virtual machine. Or another case where the virtual machine is not on the checker machine and the VM should first be copied to the tester.

I am proposing a rather large feature: a script that runs before the checker script and handles the setup for the run. A pseudoscript for this could be something like:

upload assessment on vm1
result1 = vm1.runchecker()
upload assessment on vm2
result2 = vm2.runchecker()
return (result1 + result2) / 2

Or:

get vm1 from ssh://admin@server/home/$student/vm.zip
unzip vm1.zip
upload assessment on vm1
result = vm1.runchecker()
return result
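
A minimal sketch of what the first pseudoscript could look like as an actual Python script. Everything here is hypothetical: vmchecker has no VM class, upload() or runchecker() API today, and the names only illustrate the proposed interface.

# Hypothetical pre-checker script; the VM API below does not exist in
# vmchecker and only illustrates the proposed scripting interface.

class VM:
    """Stub standing in for a vmchecker-managed virtual machine."""
    def __init__(self, name):
        self.name = name

    def upload(self, path):
        print("uploading %s to %s" % (path, self.name))

    def runchecker(self):
        print("running checker on %s" % self.name)
        return 10.0  # a real implementation would return the checker's grade

def grade(submission, vm1, vm2):
    """Run the same submission on two VMs and average the grades."""
    vm1.upload(submission)
    result1 = vm1.runchecker()
    vm2.upload(submission)
    result2 = vm2.runchecker()
    return (result1 + result2) / 2

if __name__ == "__main__":
    print(grade("hw1.zip", VM("vm1-linux"), VM("vm2-windows")))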

@mihaimaruseac
Member

I'd also add that for some assignments we only need to run a simple script that checks an MD5 sum, without needing special VMs for homework grading.
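
As a rough sketch (not anything in vmchecker today), such a check could be a plain standalone script:

# Hypothetical standalone MD5 check; no VM required.
import hashlib
import sys

def md5sum(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    submission, expected = sys.argv[1], sys.argv[2]
    if md5sum(submission) == expected:
        print("OK")
        sys.exit(0)
    print("MD5 mismatch")
    sys.exit(1)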

@cojocar
Contributor

cojocar commented Sep 13, 2014

@mihaimaruseac: there is a SubmitOnly feature implemented. It was briefly tested. See #59.
@alexandrujuncu: Do you have a concrete example of running on two machines? I'm not sure I understood. Also, the large submission feature is already implemented.

@alexandrujuncu
Author

@cojocar: the first example was for an SO1 homework: the student submits one archive and the code needs to be tested both on Linux and on Windows. There is only one assignment, so the grade is calculated from the runs on both machines.

@cojocar
Contributor

cojocar commented Sep 13, 2014

@alexandrujuncu I find it too specific to have as a feature.

@cojocar cojocar added the low label Sep 13, 2014
@alexandrujuncu
Author

@cojocar that's why the feature request was not specific to that use case, but asked for a flexible scripting interface (maybe a Python wrapper) to cover many scenarios.
And I remembered the 2nd scenario (example). It was for USO. The students had VMs on a cluster and worked on their homework there, so each virtual machine WAS the assignment, and vmchecker needed to test the entire VM.

@calin-iorgulescu
Contributor

Pull request #73 adds support for custom submission runners. It allows TAs to write their own runner, extending a base class Runner that defines exactly how a submission is executed. It provides access to the Host and VM APIs, which allow starting, restarting and stopping VMs. For example, you could reboot your VM and run tests in different phases, depending on the kernel being used. Interacting with GRUB or similar can even allow you to reboot into another OS (e.g., Windows).

It could potentially be extended to use multiple VMs as well. That would require a bit of hacking in vmchecker-vm-executor, but most of the functionality should already be there.
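
A rough sketch of how such a custom runner could look. Apart from the Runner base class named above, every method here (get_vm, start, reboot, run_tests, stop) is an assumed placeholder, not the actual Host/VM API from the PR, stubbed so the sketch runs on its own:

# Illustrative only. Runner is the base class PR #73 describes; all the
# Host/VM methods below are assumptions about that API, not its real shape.

class FakeVM(object):
    """Stub VM exposing the operations the sketch assumes."""
    def start(self):
        print("vm started")

    def reboot(self):
        print("vm rebooted")

    def stop(self):
        print("vm stopped")

    def run_tests(self, submission, phase):
        print("phase %d tests on %s" % (phase, submission))
        return 5.0  # a real runner would return the points for this phase

class FakeHost(object):
    """Stub Host that hands out VMs by name."""
    def get_vm(self, name):
        return FakeVM()

class Runner(object):
    """Stub for the base class that defines how a submission is executed."""
    def __init__(self, host):
        self.host = host

class TwoPhaseRunner(Runner):
    """Run tests, reboot the VM (e.g. into another kernel), test again."""
    def run(self, submission):
        vm = self.host.get_vm("so2-vm")
        vm.start()
        phase1 = vm.run_tests(submission, phase=1)
        vm.reboot()  # e.g. boot a second kernel, or another OS via GRUB
        phase2 = vm.run_tests(submission, phase=2)
        vm.stop()
        return phase1 + phase2

if __name__ == "__main__":
    print(TwoPhaseRunner(FakeHost()).run("hw2.zip"))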
