There was really a lot of thinking involved #ChatGPT #ai #LLMs
https://www.reddit.com/r/ChatGPT/comments/1naxubm/dont_worry_our_jobs_are_safe/

Why #OpenAI’s solution to #AI hallucinations would kill #ChatGPT tomorrow
Hallucination rates are fundamentally bounded by how well AI systems can distinguish valid from invalid responses.
Calculating whether a response is likely to be invalid results in a high number of “I don’t know” answers, which would heavily impact the user experience.
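The trade-off the post describes can be sketched as a simple abstention policy: refuse to answer whenever the model's confidence falls below a threshold. This is a minimal illustrative sketch; the function name, threshold, and confidence scores are hypothetical, not OpenAI's actual mechanism.

```python
# Minimal sketch of confidence-based abstention (illustrative only).
def answer_or_abstain(candidates, threshold=0.9):
    """candidates: list of (answer, confidence) pairs from a model."""
    best_answer, confidence = max(candidates, key=lambda c: c[1])
    if confidence < threshold:
        # Raising the threshold cuts hallucinations but multiplies
        # abstentions -- the UX cost the post is pointing at.
        return "I don't know"
    return best_answer

print(answer_or_abstain([("Paris", 0.97)]))  # confident -> answers
print(answer_or_abstain([("42", 0.55)]))     # uncertain -> abstains
```

With a strict threshold most borderline queries fall below it, so the assistant says "I don't know" often enough to feel broken to consumers, even though the same behaviour is desirable in high-stakes domains.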
[en] Questions after story "Help! My #therapist is secretly using #ChatGPT"
It's not about bots built specifically for #therapy.
"... one of my [author] main takeaways ... was that therapists should absolutely #disclose when they’re going to use #AI and how (if they plan to use it)."
"If they don’t, it raises all these really uncomfortable questions for patients when it’s uncovered and risks irrevocably damaging the #trust that’s been built."
"At present, professional bodies ... advise against using AI tools to #diagnose patients."
"Nevada and Illinois, for example, have recently passed laws prohibiting the use of AI in #therapeutic decision-making. More states could follow."
https://www.technologyreview.com/2025/09/09/1123386/help-my-therapist-is-secretly-using-chatgpt/
#psychotherapy #psychiatry #psychology #decisionmaking #adm
@evawolfangel why only #chatgpt? Why not #GenAI or #LLM?
PS: And not because I think the world is going to end due to impending superintelligence. :) But actually because we are losing things.
Completely forgot the hashtags: #chatGPT #fedilz
ChatGPT added MCP support on Wednesday.
ChatGPT leaked private Gmail data to attackers by Friday. 🤦‍♂️
Because #promptinjection is not a problem these "PhD level" AI assistants have solved.
Look at that calendar invite. That text is all it took to take over someone's #ChatGPT-connected data, letting the attacker use the same #MCP-enabled tools that are supposed to make AI useful at work.
It really is as stupid as @davidgerard keeps saying in Pivot to AI.

🌗 OpenAI's solution to AI hallucinations could make ChatGPT unusable tomorrow
➤ Why might OpenAI's hallucination fix spell the end of ChatGPT?
✤ https://theconversation.com/why-openais-solution-to-ai-hallucinations-would-kill-chatgpt-tomorrow-265107
A new research paper examines the root causes of "hallucinations" (fabricated information) in large language models like ChatGPT and presents OpenAI's proposed solution. While it could reduce hallucinations, its impact on user experience and added computational cost might make these models impractical, or even unusable, in consumer applications. The article argues that hallucinations stem not only from errors in training data but are a mathematical inevitability of the models' prediction mechanism itself, and that existing evaluation metrics worsen the problem by penalizing uncertainty. Methods to quantify an AI's uncertainty do exist, but their high computational cost and potential harm to user experience make them hard to deploy in consumer-facing products; in critical domains demanding high accuracy, such as finance or medicine, they remain economically viable.
+ this piece
#ArtificialIntelligence #ChatGPT #LargeLanguageModels #Hallucination #TechnologyAssessment
Questions I've asked the autocomplete robot today: "Why don't you apply the same nuance and skepticism to #crypto that you do to killing Hitler?" #chatgpt
Why OpenAI's solution to AI hallucinations would kill ChatGPT tomorrow
WhatsApp is getting a new AI feature – researcher: "We can't know whether there's really a human on the other end"
AI is coming to the WhatsApp messaging service. A researcher is worried about our language, our communication, and data security.
#Teknologia #Tekoäly #Whatsapp #Meta #Sosiaalinenmedia #Chatgpt #Kotimaa #Talous
"The issue can partly be explained by mistakes in the underlying data used to train the AIs. But using mathematical analysis of how AI systems learn, the researchers prove that even with perfect training data, the problem still exists."
"The way language models respond to queries – by predicting one word at a time in a sentence, based on probabilities – naturally produces errors."
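The compounding effect of word-by-word prediction can be illustrated with a toy calculation. Assuming (hypothetically, for illustration) that each token is correct independently with probability p, the chance an n-token answer contains no error at all is p**n:

```python
# Toy sketch: why predicting one token at a time "naturally produces
# errors". Assumes independent per-token correctness, which real
# models do not satisfy exactly -- this only shows the compounding.
def prob_error_free(p_token: float, n_tokens: int) -> float:
    return p_token ** n_tokens

# Even a model that is right 99% of the time per token produces an
# error somewhere in a 100-token answer in roughly 2 out of 3 tries:
print(round(prob_error_free(0.99, 100), 3))  # ≈ 0.366
```

This is why longer generations hallucinate more often even with excellent training data: small per-step error probabilities multiply across the sequence.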
The Conversation, from yesterday:
Why OpenAI’s solution to AI hallucinations would kill ChatGPT tomorrow https://theconversation.com/why-openais-solution-to-ai-hallucinations-would-kill-chatgpt-tomorrow-265107 @TheConversationUS #OpenAI #AI #ChatGPT
In short, what's happening is that AI "personas" have been arising, and convincing their users to do things which promote certain interests. This includes causing more such personas to 'awaken'.
https://www.lesswrong.com/posts/6ZnznCaTcbGYsCmqu/the-rise-of-parasitic-ai