I have come across several #datascience #jobs. This one, Evaluation Software Engineer, is very interesting.
The JD states the following:
- Organize neural network challenge scenarios and route them to the appropriate evaluation suites
- Collaborate with engineers and program managers to identify which neural network challenge cases are the highest priority to improve
- Investigate if the challenge persists in newer versions of models
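The first JD item, organizing challenge scenarios and routing them to evaluation suites, can be sketched roughly like this. This is purely my illustrative guess, not Tesla's tooling; the `Scenario` class, `SUITES` registry, and `route` function are all hypothetical names:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: tag challenge scenarios and route them to evaluation
# suites, highest-priority cases first.
@dataclass
class Scenario:
    name: str
    tags: set = field(default_factory=set)
    priority: int = 0  # higher = more urgent to fix

# Assumed tag -> suite mapping (illustrative only)
SUITES = {
    "pedestrian": "pedestrian_detection_suite",
    "vehicle": "vehicle_detection_suite",
    "night": "low_light_suite",
}

def route(scenarios):
    """Group scenario names into suites, ordered by descending priority."""
    routed = {}
    for s in sorted(scenarios, key=lambda s: -s.priority):
        for tag in s.tags:
            if tag in SUITES:
                routed.setdefault(SUITES[tag], []).append(s.name)
    return routed

scenarios = [
    Scenario("crosswalk_miss", {"pedestrian"}, priority=2),
    Scenario("parked_truck_miss", {"vehicle"}, priority=1),
    Scenario("dusk_jogger_miss", {"pedestrian", "night"}, priority=3),
]
routed = route(scenarios)
```

A scenario can land in several suites (the dusk jogger hits both the pedestrian and low-light suites), which is presumably why routing is an explicit job duty rather than a one-line lookup.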
My understanding:
For a vision model, given a failed use case (pedestrian not detected, vehicle not detected), they triage / prioritize / address:
- Why a use case fails, and how to enrich the dataset
- Are the key regions activated when we interpret feature activations across layers?
- Prioritize / add data / customize the network if needed / train / validate the fix
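The "are the key regions activated" step above can be sketched as a tiny check on an activation map: compare the mean activation inside the box where the missed pedestrian sits against the map-wide mean. This is a minimal toy sketch with a made-up 4x4 map and a hypothetical `key_region_activated` helper, not any real interpretability tool:

```python
def region_mean(act, box):
    """Mean activation inside box = (row0, row1, col0, col1), half-open."""
    r0, r1, c0, c1 = box
    vals = [act[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(vals) / len(vals)

def key_region_activated(act, box, ratio=1.5):
    """True if the region's mean activation is at least `ratio` x the map mean."""
    overall = sum(sum(row) for row in act) / (len(act) * len(act[0]))
    return region_mean(act, box) >= ratio * overall

# Toy 4x4 activation map: the pedestrian's box (rows 1-2, cols 1-2) is weak,
# suggesting the network never learned this case and the dataset needs enriching.
act = [
    [0.9, 0.8, 0.7, 0.9],
    [0.8, 0.1, 0.2, 0.8],
    [0.9, 0.1, 0.1, 0.7],
    [0.8, 0.9, 0.8, 0.9],
]
print(key_region_activated(act, (1, 3, 1, 3)))  # → False: region barely fires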
This reflects how thoroughly every scenario is validated and prioritized, ensuring the models reflect real-world scenarios. We see plenty of ML/DL job postings, but rarely this level of detail and clarity.
The JD link
This is the difference between prototype vs production vs updates, and it shows how forward-looking they are in handling all scenarios :). Behind every #autopilot model there would be tons of #scenarios, with multiple Evaluation Software Engineers and automated suites validating them.
I have never seen this type of JD anywhere except Tesla :)
Keep Exploring!!