Is artificial intelligence something that should be embraced or feared? Do you believe in the hype?
It seems to me that much of the recent public hype around artificial intelligence has centered on “fun” technologies like Watson and AlphaGo, which have been able to play games and beat humans at “trivial” matters, so to speak. While the uses of such technologies may be gimmicky in nature, their basis is not. At their core, they make decisions based on the information they currently have and then adapt their behavior based on the information they receive. The more data they obtain, the more informed their decisions become, and the better they perform. This basic concept applies widely in the field of AI: if Watson can use all the data handed to it by its developers and acquired through its interactions and competitions with opponents to win a trivia game, there seems no reason an automated car can’t use the same process, drawing on the data programmed into it by its developers and acquired through interactions with its surroundings and environment to drive itself around. Granted, this use case operates on a much larger scale, has far-reaching consequences, and requires extremely broad data sets. But the data sets available are only growing, especially as technologies become more integrated and interconnected with one another, and the core development concepts are the same. Watson and its “peers” stand maybe not as definitive proof of AI’s viability, but certainly as solid evidence for it.
Thus, I think the trend toward increasing automation is somewhat inevitable, and likewise the discomfort and uncertainty that come with change and delving into the unknown are feelings many of us will inevitably experience. However, I definitely don’t think we should simply accept AI as something to be feared. For instance, one of the things Prof. Walter Scheirer discussed in his guest lecture is the belief that facial recognition technologies have “inherent” biases based on race, gender, etc. If we simply accept this to be true and label these technologies as discriminatory, evil, and untrustworthy, we are accepting that the possible consequences of their use are something to be afraid of. In doing so we give up on the hope of improvement, and that benefits no one. Rather, we need to get past our fears and actually make efforts to fix and further develop this technology with the goal of removing the biases rather than simply accepting them.
I should note, though, that this may all be easier for me to say because my career path is in the field of technology. Increasing automation and software development will likely only widen the range of opportunities open to me and to others like me with a computer science background and a quality education. My opinion might well differ if I made my living in manufacturing or as a truck driver, positions considered to be at the greatest risk in this discussion.
Thus, as much as I don’t believe fearing AI benefits anyone, I also don’t believe it should be embraced wholesale with open arms. Understanding its risks and the issues that could arise from its implementation is essential to making sure things don’t spiral out of control too quickly. As usual with technology, the speed at which things progress, the competition among companies to develop the next best tech, and the drive to innovate and move forward at all costs can cause a lot of problems. We can’t even fully understand the implications of one technology before we’re on to the next. As some of the readings discussed, economists have not reached a definitive conclusion about whether manufacturing job losses over the past two decades stem from automation and robots, from trade deals, or from some combination of these and other factors. If we don’t know for certain what is causing a problem, we will most definitely struggle to fix it. This is why I believe we have to be very cognizant of all the issues that can arise when a new AI technology is implemented. We have to do our best testing and research to predict the faults and disruptions these technologies may cause, so that we don’t fall so far behind that we can’t develop solutions. This may mean delaying the adoption of AI slightly: taking it in not with open arms but with a very slow embrace, so to speak.