
A chat with a friend in robotics made me see AI training differently

I was talking to a friend who works on robot arms for a car plant in Detroit. He said they don't just train the AI on perfect, clean data. They run it on video of the line when things go wrong, like a dropped part or a jam. He said, 'If you only show it the right way, it has no idea what to do when the real world gets messy.' That hit different because I always thought more perfect data was the goal. Now I see the value in feeding models the weird, broken, and unexpected stuff too. It makes them tougher for actual use. Has anyone else started adding 'failure case' data to their training sets?
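For anyone curious what "adding failure cases" could look like in practice, here's a minimal sketch. Everything here is illustrative: `nominal_runs` and `failure_runs` are just placeholder lists of samples, and the 20% failure share is an arbitrary choice, not anything my friend's team actually uses.

```python
import random

def build_training_set(nominal_runs, failure_runs, failure_fraction=0.2, seed=0):
    """Mix failure-case samples into a mostly-nominal training set.

    Keeps every nominal sample and draws enough failure samples so that
    roughly `failure_fraction` of the final set comes from failure_runs.
    """
    rng = random.Random(seed)
    # How many failures we need so failures make up failure_fraction of the total.
    n_fail = int(len(nominal_runs) * failure_fraction / (1 - failure_fraction))
    n_fail = min(n_fail, len(failure_runs))  # can't use more than we have
    mixed = list(nominal_runs) + rng.sample(failure_runs, n_fail)
    rng.shuffle(mixed)  # so the model doesn't see all failures in one chunk
    return mixed
```

The one design point I'd call out: capping the failure share matters, because if the broken runs dominate, the model can overfit to jams and drops and get worse at the ordinary case.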
3 comments

ramirez.betty
Detroit car plants are still using robot arms... that feels like something from an old movie. My cousin worked on a line there and said the machines broke down all the time. It makes total sense they'd train the AI on the jams and drops. I guess I never really thought about what the data actually looks like... just more of it. Showing it only the perfect run is like teaching someone to drive in an empty parking lot and then throwing them onto the freeway at rush hour.
2
vera_hill
1mo ago
Your cousin's story about the robot arms is spot on, @ramirez.betty. It reminds me of how my phone's autocorrect gets worse every year because it only learns from my typos, not what I'm actually trying to say. We keep feeding these systems the messy stuff but forget to show them the goal. That empty parking lot idea is everywhere, like training new cashiers when the store is closed and then being shocked they struggle on a Saturday. Real world data is just a long list of problems waiting to happen.
4
miles72
27d ago
But the messy data is the real world. You can't get better without seeing the actual problems.
8