Ray Alner

Artificial Intelligence and The (Very) Near Future

I had two similar articles come up while I was thinking of a topic for this week's blog: one from a white paper and one from a podcast I listen to. Both asked how Artificial Intelligence (AI) will impact the future of how we interact with technology, and how it is already showing up in our daily lives.

The White Paper

The white paper, called "Why addressing ethical questions in AI will benefit organizations," describes an ethical problem: as AI gets better, people's trust in these systems needs to grow in step for the interactions people have with AI to be trusted. AI is taking a bigger role in screening job applicants, recognizing patterns in web traffic, making decisions about insurance claims and applications, and tracking and neutralizing new computer threats. Many people are asking what is behind these AI programs, and how they can know whether they are the target of some black-box AI meant to suck up as much data as possible. There is a lot more to the article, and it's a good read, but for brevity's sake I'll keep it short here.


The Podcast

The other, from Daniel Miessler's podcast, was interesting: it was about a Reddit bot that was writing lengthy articles using GPT-3 (from OpenAI). For those who are not familiar, you can learn more on Wikipedia, but basically, GPT-3 is a deep learning system that can write human-like articles by reading and learning from other people's (even your own) writing style; you can read one here. In the podcast, Daniel talks about these AIs being able to comment on a user's behalf, mimicking the writer's language nuances, without any input from the user.
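To make that concrete, here is a minimal sketch of the kind of call such a bot would make, using the OpenAI Python client as it existed in the GPT-3 era. The API key, prompt, and settings are placeholders of my own, not the actual bot's code.

```python
import openai  # the original GPT-3-era client: pip install openai

openai.api_key = "sk-..."  # placeholder; a real key comes from an OpenAI account

# Give the model a prompt written in the target user's voice and let it
# continue, producing a comment that mimics that user's style.
response = openai.Completion.create(
    engine="davinci",   # the base GPT-3 model of the time
    prompt="Reply to this Reddit thread in my usual writing style:\n...",
    max_tokens=150,     # cap the length of the generated comment
    temperature=0.8,    # higher values give more varied, human-looking prose
)

print(response.choices[0].text)  # the generated comment
```

Nothing in that call requires the user's involvement once a sample of their past writing is in the prompt, which is exactly the capability Daniel describes.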


My Thoughts

A couple of years ago, I would have scoffed at any AI trying to "help" me, as most of my interactions with them were simple and ineffective. Recently, though, I tried one out on a rather complicated problem (I had some issues with a CNAME setup on a website): I typed the problem in, and it came back with a solution that ended up fixing it. They do seem to be getting better.


There are two things I think will end up happening to address working with AI from both an ethical standpoint and a security standpoint, since these systems will end up changing the way we work with people.


First: I read a book that described a future of technology built around a central data hub, with all information marked with a certificate of authenticity from that hub, so it was easy to tell whether it was real or fake. I think the future of AI will be similar. There will end up being some governance (either self-governance or laws put in place) of what is human-written and what is AI-written, and if AI-written, which AI wrote it. Anything written by a human would be marked with a secure certificate of authenticity, plus a certificate that nothing has been changed since its write date, issued either by a blockchain or by a central system. With that system in place, as sketched below, it would be easier to tell what was written by AI and what was written by humans.
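As a rough illustration of what such a certificate could look like, here is a sketch that stands in the "central data hub" with a single Ed25519 key pair, using Python's cryptography package. The record fields and the certify/verify helpers are my own invention for illustration, not an existing standard.

```python
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The "central data hub" holds a signing key; everyone knows its public key.
hub_key = ed25519.Ed25519PrivateKey.generate()
hub_public = hub_key.public_key()

def certify(text: str, author: str) -> dict:
    """Issue a certificate binding the text, its author, and its write date."""
    record = {
        "sha256": hashlib.sha256(text.encode()).hexdigest(),
        "author": author,             # e.g. "human:ray" or "ai:gpt-3"
        "written": int(time.time()),  # write date; any later edit breaks the hash
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hub_key.sign(payload).hex()
    return record

def verify(text: str, cert: dict) -> bool:
    """Check that the text is unchanged and the certificate is genuine."""
    record = {k: v for k, v in cert.items() if k != "signature"}
    if record["sha256"] != hashlib.sha256(text.encode()).hexdigest():
        return False  # the text was modified after certification
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        hub_public.verify(bytes.fromhex(cert["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # forged or tampered certificate

cert = certify("This paragraph was written by a person.", "human:ray")
print(verify("This paragraph was written by a person.", cert))  # True
print(verify("This paragraph was edited afterward.", cert))     # False
```

A blockchain-based version would replace the hub's single key with an append-only public log, but the verification idea is the same: anyone can check who wrote something and that it has not changed since its write date.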


Second: These AI products, whether for job screening or insurance claims adjustment, will need to be heavily tested and very open. I hope that when people see an AI that is "black-boxed," they will resist trusting it. The more we trust systems that are closed off to both review and input, the more we will end up where we are now: with obscene abuse of our data and of our human rights.


Remember, these programs are written by biased developers, and an AI can only be as good as it is programmed to be. No matter how unbiased developers think they are, they always have some inputs and baselines they have to construct. While there is profit to be made and very good things can come out of this sort of AI, we will have to be very careful with what we trust it to do and how, otherwise it could change our world in ways we might not be able to come back from.
