ntient AI, in which self-aware machines reach human-level intelligence).
This dichotomy irritates me. I don’t want to have to choose sides. As a technologist, I embrace the positive aspects of AI, such as when it helps advance medicine or other technologies. As an individual, I reserve the right to be scared poop-less that by 2023 we might achieve AGI (Artificial General Intelligence), or Strong AI — machines that can successfully perform any intellectual task a person can.
Not to shock you with my mad math skills, but 2023 is only 10 years away. Forget that robots are stealing our jobs, will be taking care of us when we’re older, and will be asking us to turn and cough in the medical arena.
In all of my research, I cannot find a definitive answer to the following question: How can we ensure humans will be able to control AI once it achieves human-level intelligence?

So, yes, we have control issues. I would prefer humans maintain autonomy over technologies that could achieve sentience, largely because I don’t see why machines would need to keep us around in the long run.