Usage of COCO Object Detection Metrics #186

Open
pritamdodeja opened this issue Sep 25, 2024 · 0 comments
Please go to Stack Overflow for help and support:

https://stackoverflow.com/questions/tagged/tensorflow-model-analysis

If you open a GitHub issue, here is our policy:

  1. It must be a bug, a feature request, or a significant problem with
    documentation (for small docs fixes please send a PR instead).
  2. The form below must be filled out.

Here's why we have that policy: TensorFlow Model Analysis developers respond
to issues. We want to focus on work that benefits the whole community, e.g.,
fixing bugs and adding features. Support only helps individuals. GitHub also
notifies thousands of people when issues are filed. We want them to see you
communicating an interesting problem, rather than being redirected to Stack
Overflow.


System information

  • Have I written custom code (as opposed to using a stock example script
    provided in TensorFlow Model Analysis): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Fedora 40
  • TensorFlow Model Analysis installed from (source or binary): binary (PyPI)
  • TensorFlow Model Analysis version (use command below): 0.46.0
  • Python version: 3.10.14
  • Jupyter Notebook version: jupyterlab==3.6.7
  • Exact command to reproduce:

You can obtain the TensorFlow Model Analysis version with

python -c "import tensorflow_model_analysis as tfma; print(tfma.version.VERSION)"

Describe the problem


This might be an issue with documentation. I'm using YOLOv8 for an object detection task; the relevant technical details are below. I would like to use the COCO object detection metrics in TFMA, but I'm unable to find any examples that show how to configure them. I am able to visualize the distribution of input examples across classes with this:

eval_config = text_format.Parse(
    """
    model_specs {
      signature_name: "serving_default"
      prediction_key: "predictions"  # placeholder
      label_key: "labels"  # placeholder
    }
    slicing_specs {}
    slicing_specs {
      feature_keys: ["label"]
    }
    metrics_specs {
      metrics {
        class_name: "ExampleCount"
        # config: '"iou_thresholds":[0.5], "class_id":0,'
        #         '"max_num_detections":100, "name":"iou0.5", "labels_to_stack":["bbox", "label"]'
      }
    }
    metrics_specs {
      output_names: ["label"]
    }
    """, tfma.EvalConfig())
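For reference, here is a sketch of how the commented-out config above might be wired up with TFMA's built-in COCO object detection metrics. This is an assumption on my part, not a verified working config: the class name `COCOAveragePrecision` and the keys `predictions_to_stack` and `num_detections_key` are taken from recent TFMA releases and from the commented-out fragment, and should be checked against the installed version's API:

```proto
metrics_specs {
  metrics {
    class_name: "COCOAveragePrecision"
    config: '"iou_threshold": 0.5, "class_id": 0, '
            '"max_num_detections": 100, "name": "iou0.5", '
            '"labels_to_stack": ["bbox", "label"], '
            '"predictions_to_stack": ["boxes", "classes", "confidence"], '
            '"num_detections_key": "num_detections"'
  }
}
```

If something like this is the intended usage, an end-to-end example in the docs would be very helpful.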

Input:

[ins] In [21]: X.shape
Out[21]: TensorShape([4, 640, 640, 3])

Label:

[ins] In [23]: y.keys()
Out[23]: dict_keys(['boxes', 'classes'])

[ins] In [24]: y
Out[24]: 
{'boxes': <tf.Tensor: shape=(4, 1, 4), dtype=float32, numpy=
 array([[[270.22223, 421.30017, 715.66223, 600.74664]],
 
        [[554.8021 , 578.6413 , 175.54286, 193.96402]],
 
        [[612.69336, 459.9846 , 224.14223, 133.12   ]],
 
        [[560.3555 , 471.36237, 251.44888,  65.99111]]], dtype=float32)>,
 'classes': <tf.Tensor: shape=(4, 1), dtype=int64, numpy=
 array([[6],
        [6],
        [3],
        [2]])>}
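To illustrate the label shape question concretely, here is a minimal NumPy sketch of the stacking that (as I understand it) the `labels_to_stack` option performs internally: per-image boxes `[N, 4]` and classes `[N]` combined into a single `[N, 5]` array. The layout (box coordinates first, class id last) and the helper name `stack_labels` are my assumptions, not TFMA API:

```python
import numpy as np

def stack_labels(boxes, classes):
    # Hypothetical helper: append the class id as a fifth column
    # next to the four box coordinates (assumed layout: box, class).
    return np.concatenate(
        [boxes, classes[..., None].astype(boxes.dtype)], axis=-1
    )

boxes = np.array([[270.2, 421.3, 715.7, 600.7]], dtype=np.float32)  # [1, 4]
classes = np.array([6], dtype=np.int64)                              # [1]
print(stack_labels(boxes, classes).shape)  # (1, 5)
```

Is this the label layout the COCO metrics expect, or should labels stay as a dict of `boxes`/`classes` tensors?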

Model compilation:

model.compile(
    optimizer=optimizer,
    classification_loss="binary_crossentropy",
    box_loss="ciou",
)

Model prediction:

[ins] In [26]: model.predict(X).keys()
1/1 [==============================] - 0s 84ms/step
Out[26]: dict_keys(['boxes', 'confidence', 'classes', 'num_detections'])
[ins] In [28]: model.predict(X)['boxes']
1/1 [==============================] - 0s 92ms/step
Out[28]: 
array([[[-431.99997 , -310.47818 ,  544.      ,  393.79114 ],
        [-400.      , -426.98306 ,  544.      ,  506.9677  ],
        [-304.      , -431.9941  ,  544.      ,  511.95975 ],
        ...,
        [ 594.22614 ,  199.95639 ,  148.6546  ,  283.17026 ],
        [ 502.5973  ,  375.44495 ,  205.77087 ,  177.3197  ],
        [ 491.90588 ,  219.66116 ,  257.7141  ,  221.53989 ]],

       [[-431.99713 , -337.42957 ,  543.99713 ,  423.6395  ],
        [-368.      , -431.72815 ,  544.      ,  508.2921  ],
        [-304.      , -368.69293 ,  544.      ,  448.2409  ],
        ...,
        [-163.98816 ,  172.93243 ,  371.98816 ,  457.06378 ],
        [ -48.008606,   30.734238,  352.0086  ,  241.25737 ],
        [-422.85272 ,  295.79327 ,  534.8527  ,  410.34576 ]],

       [[-431.99936 , -312.31406 ,  543.9994  ,  406.8303  ],
        [-368.      , -354.40826 ,  544.      ,  404.39636 ],
        [-176.      , -431.99796 ,  544.      ,  488.12885 ],
        ...,
        [  -1.      ,   -1.      ,   -1.      ,   -1.      ],
        [  -1.      ,   -1.      ,   -1.      ,   -1.      ],
        [  -1.      ,   -1.      ,   -1.      ,   -1.      ]],

       [[-399.76962 , -394.83652 ,  543.76965 ,  482.42383 ],
        [-303.99673 , -371.62524 ,  543.9967  ,  450.7735  ],
        [-240.      , -431.732   ,  544.      ,  500.4899  ],
        ...,
        [ 555.43005 ,  307.50546 ,  178.5935  ,  307.58475 ],
        [ 500.02817 ,  306.5144  ,  212.86801 ,  227.49939 ],
        [  -1.      ,   -1.      ,   -1.      ,   -1.      ]]],
      dtype=float32)


[ins] In [29]: model.predict(X)['classes']
1/1 [==============================] - 0s 99ms/step
Out[29]: 
array([[ 1,  1,  1,  1,  1,  2,  2,  7,  1,  1,  7,  2,  2,  1,  1,  7,
         1,  7,  2,  1,  1,  1,  1,  1,  7,  1,  1,  1,  1,  1,  7,  7,
         1,  1,  1,  1,  1,  1,  1,  7,  7,  1,  7,  1,  1,  1,  1,  7,
         7,  1,  1,  2,  1,  1,  1,  1,  1,  2,  1,  2,  1,  1,  1,  1,
         1,  1,  2,  2,  1,  1,  2,  2,  1,  2,  2,  2,  2,  1,  7,  7,
         7,  7,  7,  7,  7,  7,  4,  4,  7,  7,  4,  4,  2,  4,  2,  2,
         2,  4,  4,  4],
       [ 1,  1,  2,  2,  2,  7,  2,  1,  1,  7,  1,  1,  1,  1,  2,  1,
         1,  1,  1,  7,  1,  1,  7,  1,  1,  2,  2,  1,  7,  1,  1,  1,
         1,  1,  7,  7,  1,  2,  7,  2,  1,  1,  7,  2,  7,  1,  1,  1,
         1,  2,  7,  7,  7,  1,  1,  1,  7,  1,  2,  1,  2,  2,  2,  2,
         1,  7,  1,  2,  2,  2,  7,  7,  7,  1,  1,  7,  2,  7,  2,  2,
         7,  7,  7,  7,  7,  2,  7,  7,  7,  2,  2,  7,  2,  7,  7,  7,
         7,  7,  7,  1],
       [ 1,  7,  2,  1,  7,  7,  1,  2,  7,  1,  1,  2,  1,  1,  2,  1,
         7,  1,  1,  1,  1,  7,  1,  2,  1,  7,  7,  1,  1,  1,  1,  1,
         1,  1,  1,  2,  7,  7,  7,  2,  1,  2,  1,  1,  1,  1,  2,  1,
         1,  1,  1,  7,  1,  2,  2,  1,  7,  2,  2,  2,  2,  1,  1,  7,
         1,  7,  2,  7,  1,  7,  7,  2,  2,  1,  7,  7,  2,  7,  7,  2,
         1,  7,  1,  7,  7,  7,  7,  7,  2,  2,  7,  7,  2, -1, -1, -1,
        -1, -1, -1, -1],
       [ 7,  1,  2,  1,  7,  7,  7,  2,  7,  7,  2,  1,  7,  2,  7,  7,
         1,  1,  1,  7,  1,  1,  1,  7,  2,  1,  1,  2,  7,  2,  2,  2,
         2,  7,  1,  1,  2,  7,  2,  2,  7,  1,  7,  7,  1,  7,  2,  1,
         2,  2,  7,  7,  2,  2,  7,  2,  2,  1,  7,  7,  7,  7,  1,  7,
         7,  7,  7,  7,  7,  7,  7,  7,  7,  2,  7,  7,  2,  2,  7,  7,
         2,  7,  2,  7,  2,  2,  2,  2,  2,  7,  2,  7,  2,  2,  2,  2,
         7,  2,  2, -1]])
[ins] In [32]: model.predict(X)['confidence'].shape
1/1 [==============================] - 0s 98ms/step
Out[32]: (4, 100)

[ins] In [34]: model.predict(X)['num_detections']
1/1 [==============================] - 0s 109ms/step
Out[34]: array([100, 100,  93,  99], dtype=int32)
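Similarly, on the prediction side, here is a minimal NumPy sketch of how I imagine the model's `boxes` `[B, 100, 4]`, `classes` `[B, 100]`, and `confidence` `[B, 100]` outputs would be stacked into a single `[B, 100, 6]` tensor, analogous to what a `predictions_to_stack` option would do. The layout (box, class, score) and the helper name are assumptions:

```python
import numpy as np

def stack_predictions(out):
    # Hypothetical helper: concatenate boxes, class ids, and confidence
    # scores along the last axis (assumed layout: box, class, score).
    return np.concatenate(
        [
            out["boxes"],
            out["classes"][..., None].astype(np.float32),
            out["confidence"][..., None],
        ],
        axis=-1,
    )

# Dummy outputs with the same shapes as model.predict(X) above.
out = {
    "boxes": np.zeros((4, 100, 4), np.float32),
    "classes": np.full((4, 100), -1, np.int64),
    "confidence": np.zeros((4, 100), np.float32),
}
print(stack_predictions(out).shape)  # (4, 100, 6)
```

If TFMA instead consumes the prediction dict directly (keyed by `boxes`/`classes`/`confidence`/`num_detections`), clarifying that in the docs would also answer this issue.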



Source code / logs

