veen  ·  2585 days ago  ·  post: The Story on Amtrak Cascades Train 501 Derailment

I did actually see it all over Twitter and Facebook. Amtrak is behind schedule on their implementation of train control systems:

    Much has been made of the fact that, once again, positive train control—the automatic braking technology, integrated between cars and tracks, that can detect danger and halt trains—had not been switched on. In 2008, shortly after a fatal collision on Metrolink in L.A. County, Congress mandated PTC implementation on all railroads that carry passengers and/or potentially toxic freight by 2015. Although the upgraded trains and tracks in Washington were equipped for the technology, the system was not slated to go live, corridor-wide, until 2018.

https://www.citylab.com/transportation/2017/12/passenger-rail-has-an-accountability-problem/548687/

Sadly, it takes deadly accidents to spur safety developments:

    The National Transportation Safety Board first recommended the use of “automatic train control” in 1970, a year after two Penn Central commuter trains collided, killing four people and injuring 43.

https://www.theguardian.com/us-news/2017/dec/19/amtrak-derailment-seattle-washington-positive-train-control





user-inactivated  ·  2580 days ago

    Sadly, it takes deadly accidents to spur safety developments:

hello i am here to tell you this is a fact of human nature and also not coincidentally why we should probably care about unfriendly ai

OftenBen  ·  2580 days ago

    unfriendly

Would you say it's fair to maybe use 'uncaring' rather than 'unfriendly' to frame the problem in non-adversarial terms?

user-inactivated  ·  2580 days ago

yes but i would also say i don't care about that distinction or being non-adversarial at all -- uncaring flagamuffin

OftenBen  ·  2569 days ago

Do you think that it's possible that we could produce an AGI with the potential to cause great harm a la the paperclip optimizer scenario? You don't strike me as a prophet of Skynet.