
Sunday, May 5, 2024

Artificial Intelligence (AI) and Me – Part 3 - Drawbacks, and Dangers

 

I intended to talk about some of the negative and positive aspects of Artificial Intelligence in this post. However, I decided that the positives and opportunities of AI deserved a more thorough examination. I will do a separate post on those later.

The focus of my post will be on the use of AI in creative work. There are issues in other areas of AI application that I won’t get into.

AI Has No Emotion or Motivation

“Well, he acts like he has genuine emotions. Um, of course he's programmed that way to make it easier for us to talk to him. But as to whether he has real feelings is something I don't think anyone can truthfully answer.” Dave Bowman talking about HAL 9000 in the movie “2001: A Space Odyssey.”

Having watched the movie 30 or 40 times, I think the question is deliberately asked but never answered, one the audience is meant to ponder. Does HAL really have emotion, or are his actions just a flaw in his programming?

We have the same question about the real-world AIs we have today. Based on my reading and experience so far, I don’t think current AIs have either emotions or motivation. Maybe someday they will.

In many applications of AI, emotions and motivations are not important. But when we call on AIs to be creative, emotions and motivations are critical.

In my own experience, and in the opinions I’ve heard from others, stories written by AIs tend to be bland. They lean on clichés and lack emotion. That sort of thing is OK in some contexts, like business or technical writing, but it just doesn’t work in creative writing.

Writers and other creative people are driven to create. No one needs to prompt them. They pour their emotions into their work. That is what makes the art they create interesting to other people.

None of the existing AIs can be described as self-starters. They only react when prompted. They are not motivated to do anything. Left to themselves, they would do nothing. AIs do not want anything. They have no desires.

Homogenization of Content

I came across this idea several times. AIs that generate writing are built on the likelihood that one word will follow another. This results in writing that hews toward the most common expression. As some have noted, AIs are cliché machines. They don’t go for the unusual; they go for the common.
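
A minimal sketch of that next-word mechanism, using a made-up three-sentence corpus: a toy bigram model that always picks the most common continuation reproduces the most frequent phrasing and nothing else. The corpus and function names are invented for illustration.

    from collections import Counter, defaultdict

    # Made-up corpus; real systems train on hundreds of billions of words.
    corpus = ("the night was dark and stormy . "
              "the night was dark and cold . "
              "the night was quiet and strange .").split()

    # Count which word follows which (a bigram model).
    following = defaultdict(Counter)
    for word, next_word in zip(corpus, corpus[1:]):
        following[word][next_word] += 1

    def most_likely_next(word):
        """Always pick the single most common continuation (ties go to first seen)."""
        return following[word].most_common(1)[0][0]

    # Greedy generation hews to the most common expression:
    word, sentence = "the", ["the"]
    for _ in range(6):
        word = most_likely_next(word)
        sentence.append(word)
    print(" ".join(sentence))  # -> the night was dark and stormy .

The less common phrasings (“quiet and strange”) never appear in the output; a real model samples with more variety, but the pull toward the common remains.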

The trend for the future is toward more homogenized writing. As more of the writing available for training AIs is created by other AIs, less common expressions will be driven out of use. Bland writing from AIs will become blander.

False Hopes for an Easy Writing Career

I’ve seen several articles and YouTube videos claiming that you can be a successful writer using AIs.

I suspect that is unrealistic.

Mostly it is because few writers are successful. The average traditionally published book may sell only about 3,000 copies in total. With a few titles selling millions, most books must sell far fewer than 3,000 copies. Self-published books average only about 250 total sales (see: https://scribemedia.com/book-sales/).

In my own case, the book I published sold three copies and I made $2.87. It is still for sale if you’d like to bump up my sales to four copies. https://www.lulu.com/shop/james-morison/walk-in-the-snow-a-collection-of-stories-and-articles/ebook/product-1m4ermrp.html

Aside from the low returns on publishing in general, I doubt that many people would want to buy an AI-generated book. Nowadays, anyone who wants to can generate their own AI story, so why pay for someone else’s?

I think that people may well want to generate AI stories, not to sell, but to read themselves. It is already possible to have AI generate a story to your personal specification. You can have a novel where you are the hero.

I haven’t gone to that extent myself, but I have used ChatGPT to create short stories for my own entertainment. I expect that I am not the only one who has done this. While I did get some enjoyment from this, the experience didn’t give me much encouragement about the future of creative writing with AI.

Wrestling with AI

When I experiment with systems like ChatGPT, I find myself fighting with the system to get it to write what I want. In the end, I go back to just writing it myself. If you have something you want to say, or want to say it in a specific way, working with large language models can be very frustrating.

I’m not sure that this limitation will go away any time soon. It may not be a problem if you do not care what is generated, but that is rarely the case.

AI Generated Voices

Many text-to-voice systems have been used to create audiobooks and YouTube videos. They have the same problems of blandness and lack of emotion as AI writing systems. The reaction to them is often quite negative.

I have used text-to-voice systems myself in some of my projects. I tried some tricks to give the voices a little more character, but I still got negative feedback from people. These voices have improved, but they still have that same blandness and lack of emotion, which pales next to the humanity of real actors.

If You Want to be a Writer, Why Would You Use AI?

“I write for the same reason I breathe … because if I didn’t, I would die.” - Isaac Asimov

I read another comment from Isaac Asimov: if you want to be a writer, you must enjoy the act of writing. He went on to emphasize that he meant sitting at the keyboard and bleeding your ideas onto paper (note: this was from pre-home-computer days).

I write, and do the other creative activities I do, because I want to do them. Using AI to write stories for me would be like sending an AI assistant to watch a movie I want to see.



This post is a mirror from my main blog http://www.dynamiclethargyfilms.ca/blog

Sunday, March 24, 2024

“Collapsing into the Arms of a Pale Hit Man” Posted

I posted the recording of my writing exercise “Collapsing into the Arms of a Pale Hit Man.” It is supposed to be a romantic story. It took longer than I had hoped because of health issues I had to deal with.

“Collapsing Into the Arms of a Pale Hitman”

Image Creator from Microsoft Bing

2024, 3:54

Ursula notices a pale man following her. She confronts him.

Character voices by Voice.ai: Olivia-V20 and Arnold

https://soundcloud.com/dynamiclethargy/collapsing-into-the-arms-of-a-pale-hitman

 

“Artificial Intelligence (AI) and Me”

I have a complete draft of part 3 of my “Artificial Intelligence (AI) and Me” post. I am not totally happy with it, so I plan to work on it some more. Someone suggested I have ChatGPT rewrite it.



This post is a mirror from my main blog http://www.dynamiclethargyfilms.ca/blog

Sunday, March 3, 2024

Artificial Intelligence (AI) and Me – Part 2

Almost every day I see new developments in the field of Artificial Intelligence (AI), and new opinions about AI. I have tried to keep current in my posts, but inevitably, some of what I say will be outdated, possibly within just a few days.

AI Problems

In this post I will talk about several problems that have impeded the development of neural network-based AI systems in the past. These same problems will likely continue to be problems for AI in the future.

The Training Data Problem

I think that compiling the training data set for AI poses the most formidable obstacle to creating practical AI systems. Large networks need a large volume of data. The training data set for GPT-3, the model behind the original ChatGPT, ran to roughly 300 billion tokens. Until the Internet matured, it would have been impossible to assemble the volume of text needed to train complex AI systems like ChatGPT.

When I took the Artificial Intelligence class back in the 1990s, they warned us about the need to ensure that the training data set was of high quality. Any errors or mistakes in the data would contaminate the AI, leading to poor-quality results. A major part of compiling the data is checking it and, if necessary, cleaning it.
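
As a minimal sketch of what that checking and cleaning can look like, here is a toy filter in Python. The thresholds and heuristics are invented for illustration; real data pipelines are far more elaborate.

    def clean_training_text(lines):
        """Filter raw text lines before training (illustrative heuristics only)."""
        seen = set()
        for line in lines:
            text = line.strip()
            if len(text) < 20:                 # drop short fragments
                continue
            letters = sum(c.isalpha() or c.isspace() for c in text)
            if letters / len(text) < 0.8:      # drop lines that are mostly markup or debris
                continue
            if text in seen:                   # drop exact duplicates
                continue
            seen.add(text)
            yield text

    raw = ["Hello world",
           "A clean, useful sentence for training the model.",
           "A clean, useful sentence for training the model.",
           "<td>37</td><td>42</td>"]
    print(list(clean_training_text(raw)))  # keeps only one copy of the full sentence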

The Intellectual Property Problem

The huge demand for training data has started to run into legal challenges. Writers, other creative people, and owners of intellectual property are concerned that AI companies are not compensating them when their work is used to train AI systems. My own work may well have been used to train AI systems; I have no idea how I would find out if it had.

OpenAI has said that it cannot create a functional AI without the use of copyrighted material: https://arstechnica.com/information-technology/2024/01/openai-says-its-impossible-to-create-useful-ai-models-without-copyrighted-material/. It is likely that some kind of limitation on the use of copyrighted material will emerge. This could restrict the development of AI systems and increase their cost. While there is a large volume of material available in the public domain, this material is often old and outdated.

The Bias Problem

Bias in the training data is part of the issue of how clean the data is. However, bias is something that deserves special consideration. If the data has a bias, so will the AI. In the course I took in the 1990s, they reinforced this issue time and time again. Sadly, bias has been a problem with many AI systems. While this can have humorous results, it can also cause real harm.

In one case, an AI created to identify skin cancers learned that images of actual skin cancers happened to include a ruler for scale, while non-cancer images did not. https://venturebeat.com/business/when-ai-flags-the-ruler-not-the-tumor-and-other-arguments-for-abolishing-the-black-box-vb-live/.

Many articles have been published about bias in AI models for law enforcement. For example: https://daily.jstor.org/what-happens-when-police-use-ai-to-predict-and-prevent-crime/. The problem of bias is not limited to policing, though.

Bias creeps in during the creation of the training data set. If there is bias in how a law is enforced, the data available about that law will contain that bias. It is essential that the data be checked for bias before training. Not only can the original data be biased, but the people checking for bias may have their own biases, which they may be unaware of.
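
One crude but concrete pre-training check is to compare outcome rates across groups in the data. A minimal sketch, with records invented for illustration:

    from collections import defaultdict

    # Invented (group, outcome) records standing in for a real dataset.
    records = [("A", 1), ("A", 0), ("A", 1), ("A", 1),
               ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome

    for group in sorted(totals):
        print(f"group {group}: positive rate {positives[group] / totals[group]:.0%}")

    # A gap like 75% vs 25% does not prove bias, but it is exactly the
    # kind of imbalance worth investigating before training on the data.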

There was a recent case where attempts by Google to correct bias in an AI model resulted in a different bias. https://globalnews.ca/news/10311428/google-gemini-image-generation-pause/.

It is easy to call out the bias in AI systems. However, it is clear that controlling bias is difficult. Discovering the best ways to address bias in AIs will continue to be a major challenge.

As an aside, I suspect that the underlying cause of the biases people have may well be the same as the cause of bias in AI systems. Learning how to deal with bias in AI systems might even help us deal with bias in people.

The Black Box Problem

I spent most of my working career developing and applying transportation forecasting models. These were used to predict what traffic would be like in the future; those forecasts were then used to plan the transportation system.

Many people criticized the forecasts the model produced because they saw the model as a black box. They couldn’t see how it worked, so they tended to distrust what it predicted. While I felt that the model could be explained, the explanations were complicated. Few people had the time or patience needed to understand them.

The problem with neural networks is that they truly are black boxes. We can see the inputs and the outputs. We can even look at the parameters inside the AI. But with AI systems that can have 175 billion parameters, it is not practical for people to understand, let alone explain, how the AI got the answer it did.
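
A toy illustration of the point: in the tiny invented network below, every parameter is visible and countable, yet no single weight explains an answer. Scale this to 175 billion parameters and explanation becomes hopeless.

    import numpy as np

    rng = np.random.default_rng(0)

    # A tiny two-layer network: all 49 parameters are right here...
    w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
    w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    def predict(x):
        hidden = np.maximum(x @ w1 + b1, 0.0)  # ReLU layer
        return hidden @ w2 + b2

    print("parameters:", sum(a.size for a in (w1, b1, w2, b2)))
    print("one weight:", w1[0, 0])             # inspectable, but meaningless alone
    print("output:", predict(np.ones(4)))      # inputs and outputs are visible too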

The black box problem makes it very difficult to fix an AI that isn’t acting the way you want it to. It can’t be debugged the way a computer program can, and it can’t be reasoned with the way a person can.

It appears to me that the current approach is to revisit the training data set and modify it before retraining the model. It may be necessary to revise the data and retrain the AI system many times before the users and developers are satisfied.
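
Sketched in Python, with every function below a made-up stand-in rather than any real training API, that workflow is an outer loop around training:

    def train(data):
        return list(data)                          # stand-in for an expensive training run

    def evaluate(model):
        return [x for x in model if "BAD" in x]    # stand-in for audits and user feedback

    def revise_dataset(data, problems):
        return [x for x in data if x not in problems]

    # Revise the data, retrain, re-check, repeat until no problems remain.
    data = ["good example", "BAD example", "another good one", "BAD too"]
    rounds = 0
    while True:
        model = train(data)                        # full retraining each round
        problems = evaluate(model)
        rounds += 1
        if not problems:
            break
        data = revise_dataset(data, problems)      # fix the data, not the weights

    print(f"satisfied after {rounds} rounds, kept {len(data)} examples")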




This post is a mirror from my main blog http://www.dynamiclethargyfilms.ca/blog