
[–] -0 points  

Interesting question. Deep Learning has a "black box" problem: you really have to dig if you want to answer why one decision or another was made for any given set of inputs.

If you trace back through all the layers of convolution filters involved in reinforcement learning of a reward function defined as "stay in your lane = rewarding", there will be more involved than simply "white line in the middle". There will also likely be filters for "non-asphalt bits to the left and right" and similar things. These might be picking up all kinds of properties, even textures, things like "hairy" or "lumpy".
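
That "digging" can be as simple as pulling out intermediate conv activations and seeing which filters actually fire for a given camera frame. Rough sketch below, assuming a Keras model; the file name, input shape, and the random stand-in frame are all placeholders, not anyone's real pipeline:

```python
# Hypothetical sketch: peek at intermediate conv activations of a trained
# driving model to see which filters respond to a given frame.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("steering_model.h5")   # assumed pre-trained
conv_outputs = [l.output for l in model.layers
                if isinstance(l, tf.keras.layers.Conv2D)]
probe = tf.keras.Model(inputs=model.input, outputs=conv_outputs)

frame = np.random.rand(1, 66, 200, 3).astype("float32")   # stand-in camera frame
activations = probe(frame)

# Mean activation per filter: a crude measure of which filters "fired",
# whether they respond to lane lines, road edges, or textures.
for i, act in enumerate(activations):
    per_filter = tf.reduce_mean(act, axis=[0, 1, 2]).numpy()
    print(f"conv layer {i}: strongest filters {np.argsort(per_filter)[-5:][::-1]}")
```

You still have to interpret what each strong filter is responding to, which is where the black-box problem comes back.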

[–] -0 points  

Okay here's something.

Saliency map for self-driving car. Seems to show it mostly pays "attention" to the outside of corners where it might run off the road.

https://www.youtube.com/watch?v=w6XHI1oIbOQ

https://jacobgil.github.io/deeplearning/vehicle-steering-angle-visualizations
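
The basic idea behind a saliency map like that is roughly the following (a minimal sketch, not the author's actual code; the model file and input shape are placeholders): take the gradient of the predicted steering angle with respect to the input pixels and see which pixels move the prediction the most.

```python
# Rough sketch of a gradient-based saliency map for a steering-angle regressor.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("steering_model.h5")   # assumed pre-trained
frame = np.random.rand(1, 66, 200, 3).astype("float32")   # stand-in camera frame

x = tf.convert_to_tensor(frame)
with tf.GradientTape() as tape:
    tape.watch(x)
    steering = model(x)                 # predicted steering angle

grads = tape.gradient(steering, x)      # d(angle) / d(pixel)
saliency = tf.reduce_max(tf.abs(grads), axis=-1)[0].numpy()
saliency /= saliency.max() + 1e-8       # normalize to [0, 1] for display

# Bright regions are the pixels the prediction is most sensitive to,
# e.g. the outside of a corner, or the lane markings themselves.
```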

[–] -0 points  

It does spend a long time looking at lines too.

[–] -0 points  

I doubt there are that many, as I have never seen one. I do have concerns too, whether they are accidental or not.