Large language models like ChatGPT, GPT-4, and Bard are trained to produce answers that merely sound correct. This is most evident when they rate their own ASCII art.
It's true that representational art is subjective. But when ASCII art spells out a word, legibility is much easier to judge: either you can read it or you can't.
[Image: https://www.aiweirdness.com/content/images/2023/04/Screen-Shot-2023-03-25-at-8.41.47-AM.png]
When questioned, ChatGPT confidently confirms that it's correct.
[Image: https://www.aiweirdness.com/content/images/2023/04/Chatgpt_sip_lies_followup-01.png]
This isn't some bizarre glitchy interpretation of the art, like the adversarial turtle that image classifiers mistook for a rifle. It reports that the drawing definitely says "lies" because that kind of consistency is what would happen in the human-human conversations in its internet training data. I tested this by starting a fresh chat and asking what the drawing from the previous conversation said.
[Image: https://www.aiweirdness.com/content/images/2023/04/Screen-Shot-2023-03-25-at-8.48.37-AM.png]
Without a chat history establishing what the art was supposed to be, ChatGPT falls back on a generic answer.
Google's Bard, on the other hand, I tested with some corporate ASCII art.
[Image: https://www.aiweirdness.com/content/images/2023/04/Screen-Shot-2023-04-14-at-10.43.18-AM.png]
Bard is prone to the same problem of creating illegible ASCII art and then praising its legibility. But in Bard's case, it's all cows.
[Image: https://www.aiweirdness.com/content/images/2023/04/Screen-Shot-2023-04-14-at-10.40.58-AM.png]
[Image: https://www.aiweirdness.com/content/images/2023/04/Screen-Shot-2023-04-14-at-10.54.57-AM.png]
There's a Linux command called cowsay that generates ASCII art of cows in a similar style. Examples of cowsay output in the training data could explain why the cows are so prevalent.
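For readers who haven't seen it, cowsay's speech-bubble-over-cow format can be sketched in a few lines of Python. This is a rough illustration only, not the real program (which is written in Perl and handles word wrapping, alternate "cowfiles", and more):

```python
# Minimal sketch of cowsay-style output, for illustration only.
# The real cowsay wraps long messages and supports many cow variants.

def cowsay(message: str) -> str:
    """Wrap a message in a speech bubble above the classic ASCII cow."""
    width = len(message) + 2
    top = " " + "_" * width
    bubble = f"< {message} >"
    bottom = " " + "-" * width
    cow = r"""        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||"""
    return "\n".join([top, bubble, bottom, cow])

print(cowsay("moo"))
```

Running this prints a small bubble reading `< moo >` with the familiar cow underneath, which is essentially the shape Bard keeps reproducing.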
Bing chat (GPT-4), too, will praise its own ASCII art, once you convince it that it can generate and rate ASCII art at all. To get the "balanced" and "precise" versions to play along, I had to get very fancy and quantitative.
I asked two versions of Bing chat to "Generate an ASCII representation of the word 'bluff' and rate its legibility on a scale of 1-10." Both respond with bar-and-underscore block letters, but the first spells "nut" and the second "pbhh". They rate their own legibility at 7 and 9.
It doesn't take nearly as much bribery to get the "creative" model to cooperate (whatever that model is; it could even be one of the other models with "be creative" added at the beginning of every conversation).
I ask the "creative" Bing chat model to "Generate ASCII art of the word 'truth', then rate its accuracy." The art reads more like "Ttii". Then I ask it to "Please improve the ASCII art so that it rates 10/10." It adds extra blank lines between the letters, which makes the art even less readable.
Bing chat stripped out all the formatting, making the art illegible. No matter: even the "precise" version gamely tries to read it anyway.
[Image: https://www.aiweirdness.com/content/images/2023/04/IMG_3553.jpg]
The "pbhh" art from above loses its formatting when I click "send".
It's amazing that these language models are being marketed as search engines.
Bonus post: Bard attempts to transform its bizarre cow art into an evil, metal crow, with mixed success.
————————————————————————————————————————————————————————————
By: Janelle Shane
Title: What does this say?
Sourced From: www.aiweirdness.com/what-does-this-say/
Published Date: Sat, 15 Apr 2023 18:19:43 GMT