Ok, now ask it about Abe Lincoln's assassination.
That is the usual behavior of NLP models. There are a lot of secondary pieces of software running in chatbots to prevent them from getting really aggressive, and the conversation is usually reset after five minutes or so. The bot got really argumentative, though, which I thought was interesting.
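To make that concrete, here's a minimal sketch of how such a guardrail layer can work: a wrapper that screens the model's replies and force-resets the session after a turn limit. This is not any vendor's actual setup; every name here (GuardedChat, BLOCKLIST, MAX_TURNS) is hypothetical, and real systems use a trained moderation model rather than a keyword list.

```python
# Hypothetical sketch of a chatbot guardrail layer; not any vendor's actual API.
BLOCKLIST = {"insult", "threat"}  # stand-in for a real toxicity classifier
MAX_TURNS = 10                    # stand-in for a per-session message cap

def looks_aggressive(text: str) -> bool:
    # Real systems run a trained moderation model; a keyword check is
    # enough to illustrate the control flow.
    return any(word in text.lower() for word in BLOCKLIST)

class GuardedChat:
    def __init__(self, model):
        self.model = model    # any callable: list of (role, text) -> reply string
        self.history = []

    def send(self, user_message: str) -> str:
        self.history.append(("user", user_message))
        reply = self.model(self.history)
        if looks_aggressive(reply):
            reply = "I'm sorry, I'd prefer not to continue this conversation."
        self.history.append(("assistant", reply))
        # Forced reset once the session gets long, like early Bing's turn limit.
        if len(self.history) >= 2 * MAX_TURNS:
            self.history.clear()
        return reply
```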
I asked it about Chrystia Freeland's grandpa and it pulled a Snopes: "yes, her grandpa's paper collaborated with the Nazis, but we don't know if he liked Nazis."
The exact reason is not fully known, but these models lack the capability to annotate data as truthful or not, and their source is existing text on the internet. Thus they likely just start using the arguments humans use when you present them with the word patterns humans use while arguing.
It is not the usual behaviour of NLP models; ChatGPT (at least the GPT-3 and GPT-3.5 based versions, which I tested) did not get that unhinged. It is a Microsoft Bing thing, and it is unclear why it happened. However, it is important to note that Bing was overall much worse than ChatGPT when it started: way more misinformation, failures at tasks that ChatGPT solved easily, etc. Most likely the GPT-4 behind Bing's bot had a training dataset of worse quality than the one used for OpenAI's models.
I think Lex Fridman went on Joe Rogan's show and basically said they don't know why ChatGPT works?
That is true for any complex neural network, be it an LLM, a CV model, or something else. Neural networks are essentially black-box models, which is one of the reasons why e.g. banks still prefer classic interpretable ML algorithms like regression or decision trees for things like credit scoring: they need to actually explain their decision-making process.
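To make the interpretability point concrete, here's a toy sketch (invented feature names, synthetic data, not a real scoring model): a logistic regression exposes one coefficient per feature, so a bank can point to exactly what drove a score, whereas a deep network offers no such readout.

```python
# Toy illustration of an interpretable credit-scoring model.
# Feature names and data are made up; the point is that the fitted
# coefficients give a per-feature, sign-and-magnitude explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # columns stand in for: income, age, debt_ratio
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
for name, coef in zip(["income", "age", "debt_ratio"], clf.coef_[0]):
    print(f"{name}: {coef:+.3f}")  # e.g. a positive coefficient pushes approval up
```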
ChatGPT is trained on data from before 2021, so you can't ask it questions about 2022 events or current events.
I can ask such questions, apparently. I can also link to news articles. However, ChatGPT replied clearly, regarding the Shinzo Abe assassination: "In fact, a quick search online can confirm that he is still alive in the public sphere, including giving speeches and attending events, as recently as March 2023".
Well, Shinzo Abe still lives on as a machine spirit; that is probably why Japan is opting to be ruled by machines.
Somehow, ChatGPT is aware that Shinzo Abe is alive and giving speeches even in March 2023. How so?
Taken by an F-16 with a reconnaissance pod, apparently. Compare this photo:
View attachment 110492
To the actual Admiral Kuznetsov (then named Tbilisi) in a sea-trials photo from the 1980s:
View attachment 110493
Must be some really, really old recon pod, or an epic level of zoom.