Systematic testing #2658

Open
AntoinePrv opened this issue Feb 22, 2023 · 5 comments

@AntoinePrv
Contributor

Tests typically use xt::xarray, and sometimes xt::xtensor, but it turns out there are subtle differences between all the available containers/expressions/views that lead to errors when switching one for another.

A few issues have only been caught when reported by users, who then have to wait for another release to get the fix. For instance:

Tests should be refactored and templated to systematically run with all of the following (a sketch is given after the list):

  • xt::xtensor_fixed
  • xt::xtensor
  • xt::xarray
  • A view
  • An expression
  • 1D, ND
  • The Python equivalent
  • The R equivalent
  • The Julia equivalent

Since this may generate a lot of tests, we would need a strategy to avoid running/compiling all of them at the same time (perhaps that would only be done in CI, in parallel).

I am unsure whether testing against projects in other repositories (the language bindings) should be done here or there.
Depending on whether the fix needs to land in xtensor or in the other repository, there will always be a case of PR deadlock.

@JohanMabille
Member

A way to cover most of the use cases from the bindings is to test:

  • with column-major order
  • with a tensor whose shape is a vector of signed integers

This should catch 99% of the issues. Other issues would be in the downstream projects themselves.
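A hedged sketch of what such "binding-like" containers could look like in the C++ test suite; the xt::xarray_container instantiation with a std::vector<std::ptrdiff_t> shape is my assumption of how to emulate the signed shapes of pyarray, not an existing alias in xtensor:

```cpp
// Sketch: containers emulating the bindings' properties, to be fed to the
// templated tests instead of (or in addition to) plain xt::xarray<double>.
#include <cstddef>
#include <vector>

#include <xtensor/xarray.hpp>
#include <xtensor/xstorage.hpp>

// Column-major container, as used by the R and Julia bindings.
using cm_array = xt::xarray<double, xt::layout_type::column_major>;

// Container whose shape is a vector of signed integers, mimicking
// pyarray/pytensor (NumPy stores shapes as signed npy_intp).
using signed_shape_array = xt::xarray_container<
    xt::uvector<double>,             // data storage
    xt::layout_type::row_major,      // layout
    std::vector<std::ptrdiff_t>>;    // shape container (signed value type)

int main()
{
    cm_array a = {{1., 2.}, {3., 4.}};
    signed_shape_array b = {{1., 2.}, {3., 4.}};
    // Every test exercising shapes, strides and broadcasting should pass
    // for these types, not only for the default xt::xarray<double>.
    return (a(1, 0) == 3. && b(1, 0) == 3.) ? 0 : 1;
}
```

If aliases like these were added to the type list of the templated tests above, a large part of the xtensor-python/xtensor-r surface could be exercised without pulling the bindings into this repository.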

@tdegeus
Member

tdegeus commented Feb 28, 2023

I concur. I too have had my share of annoying small bugs or missing templates:

That is also why I proposed

What I was planning to do is to write a Python module exposing most of xtensor's functions with the types xtensor_fixed, xtensor, xarray, pytensor, and pyarray, and then test every function against NumPy on random input. That would cover at least part of what you list above. The only reason I was hesitating is maintainability: how would we ensure that all possible functions are included?

@JohanMabille
Member

JohanMabille commented Feb 28, 2023

How would we ensure that all possible functions are included?

I'm not sure that we can, but that should not be a blocker. Also, I'm not sure we need to test with pytensor and pyarray (or that would mean the tests should live in another repo), as long as we test on containers with similar properties (like the signed int shapes).

@AntoinePrv
Contributor Author

@JohanMabille remember we also had that pytensor issue caused by the shape being a view on the Python data.

@tdegeus
Member

tdegeus commented Mar 2, 2023

An initial proposal for the Python bindings: xtensor-stack/xtensor-python#288. It would be great if we could discuss this a bit.

Furthermore:

  • I'm a bit worried about some of the timings... It seems that some of NumPy's functions might be quite a bit faster, or there might be some big bottleneck with allocation?
  • The test is already finding a first bug (which I started debugging in Average: debugging bug #2569).
