After a discussion with @nuclearcat, @JenySadadia and @pawiecz regarding the issues exposed in #2455, we identified two diverging opinions on what the KernelCI API should provide as an interface (endpoints, models):
a) a higher-level view of test results and regressions, with expressive query capabilities that clients (users and applications) can use directly. The API should abstract the runtime details about tests, such as their intermediate representation by the runtimes, and simply store and provide the results in a standard, common way regardless of each test's idiosyncrasies.
b) a strictly low-level representation of the test runs: basically an organization of the tests as they are run and of their results as provided by the runtimes. Higher-level tools are then responsible for processing this low-level data into results suitable for users.
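To make the contrast concrete, here is a minimal sketch of the two views. All class and field names below are hypothetical illustrations, not the actual KernelCI API schemas:

```python
from dataclasses import dataclass, field

# (b) Low-level view: a test node stored roughly as the runtime reports it.
# Field names are illustrative only.
@dataclass
class RuntimeTestNode:
    path: str                                 # e.g. "baseline.login", as named by the runtime
    runtime: str                              # e.g. "lava", "kubernetes"
    raw_result: str                           # runtime-specific result string
    data: dict = field(default_factory=dict)  # runtime-specific payload

# (a) High-level view: a normalized result, independent of runtime quirks.
@dataclass
class TestResult:
    name: str
    status: str          # drawn from a fixed vocabulary: "pass", "fail", "skip"
    regression: bool = False

# A hypothetical normalization step. Under option (a) the API itself would
# perform it; under option (b) it is left to higher-level tools.
def normalize(node: RuntimeTestNode) -> TestResult:
    status_map = {"Complete": "pass", "Incomplete": "fail", "Skipped": "skip"}
    return TestResult(name=node.path,
                      status=status_map.get(node.raw_result, "fail"))

raw = RuntimeTestNode(path="baseline.login", runtime="lava", raw_result="Complete")
print(normalize(raw).status)  # -> pass
```

The design question is essentially where this `normalize` step lives: inside the API (a) or in each consumer (b).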
Although I think (a) is the more reasonable approach (even if a somewhat radical shift from the current design), and it is possibly even compatible with (b), we settled on (b) for now on @nuclearcat's recommendation.
This issue, then, encapsulates the effort to identify and abstract the test result details provided by the API into something more useful for higher-level tools (regression processing, dashboards) and users.
r-c-n transferred this issue from kernelci/kernelci-pipeline on Mar 15, 2024