chatgpt

23.10.2025 16:38
elinext (@elinext@mastodon.de)

SAP + ChatGPT-5 = next-level AI in the enterprise!

Automated workflows and smart analyses await – details in the new article. 🌐🤝

elinext.de/blog/chatgpt-5-inte

#AI #SAP #ChatGPTIntegration #ChatGPT






23.10.2025 16:12
habr (@habr@zhub.link)

Neuro-Digest: key events in the AI world for the 4th week of October 2025

Hi! This is a new issue of the "Neuro-Digest" — short, useful overviews of key events in the world of artificial intelligence and technology. It was an eventful week: Anthropic rolled out the lightweight Claude Haiku 4.5, Suno 4.5 was opened up for free, Microsoft enabled a voice agent in Windows 11, and OpenAI unveiled the ChatGPT Atlas AI browser, while AI is already writing half the text on the web. All the essentials in one place. Let's go! Read the digest →

habr.com/ru/companies/timeweb/

#ии #нейросети #генеративный_ИИ #LLM #Claude #chatgpt #Qwen #Suno #Copilot #timeweb_дайджест






23.10.2025 16:12
dustcircle (@dustcircle@mastodon.social)

bsky.app/profile/dustcircle.bs






23.10.2025 15:38
sanjay_ankur (@sanjay_ankur@fosstodon.org)

#ChatGPT is bullshit | Ethics and Information Technology - link.springer.com/article/10.1

#AcademicChatter #LLMs #GenerativeAI #AI #AIEthics






23.10.2025 15:35
ViolettaOleander (@ViolettaOleander@mastodon.social)

If these AI things use so much electricity, why do we use them for every little thing?

Do I need AI to ask what I googled the day before yesterday? -
In any case, podcasts etc. keep recommending it to me as interesting and important.

Or little pictures and videos of talking cats?






23.10.2025 15:28
eickertv (@eickertv@mastodon.social)

ChatGPT allows erotic content and the world is losing its mind.

OpenAI loosens its limits 🔥 After massive criticism, CEO Altman declared that OpenAI is "not the moral police of the world" – and intends to allow adults such content in the future.

Treating adults like adults 🧠 Altman stressed that OpenAI wants to "treat adult users like adults."

Between freedom and responsibility ⚖️ At the same time, an FTC investigation into ChatGPT's influence on minors is underway.







23.10.2025 15:24
justincrozer (@justincrozer@mastodon.social)

If an AI agent mode in a web browser can't solve CAPTCHAs, then it's not really an 'AI Agent' now, is it!?






23.10.2025 15:01
Eshahaber (@Eshahaber@mastodon.social)

An era is ending on WhatsApp! The end date has been officially announced: a notable development in the tech world. OpenAI has officially announced that its popular AI chatbot ChatGPT will no longer be usable on the WhatsApp platform.


IT OFFICIALLY ENDS ON JANUARY 15, 2026

According to the statements made, ChatGPT's… eshahaber.com.tr/haber/whatsap EshaHaber.com.tr







23.10.2025 14:43
2025 (@2025@civic.io)

Maybe We Shouldn’t Call Them AI “Agents”

Beware of pretty faces that you find. A pretty face can hide an evil mind.
– Johnny Rivers, Secret Agent Man

As artificial intelligence capabilities expand into government service delivery, it’s worth pausing to think carefully about the language we’re using. The terms “agentic services” and “agentic AI” have gained significant traction in the tech industry, and for good reason — they capture something important about AI systems that can act autonomously. I myself am as guilty as anyone of using these terms frequently. But for those of us working in government contexts, there are some considerations worth keeping in mind.

The “Agent” Problem in Government

In government, the word “agent” carries particular connotations. FBI agents. Border patrol agents. IRS agents. These are enforcement and investigative roles. When citizens hear “government agent,” they often think of authority, compliance, and oversight — not helpful service delivery.

This isn’t an insurmountable problem, but it’s worth being aware of. The language we choose shapes how citizens perceive and respond to new service models. If we’re trying to build trust in AI-enabled services, starting with terminology that might trigger concerns about surveillance or enforcement may not be ideal.

(And yes, for a certain generation, The Matrix movies didn’t exactly help the cultural perception of “agents” either. 😅)

What the term “agents” might obscure

There’s a deeper consideration beyond just the word “agent” itself. Calling these services “agentic” can make them sound radically new — a complete departure enabled by cutting-edge AI. But that framing might obscure an important reality.

Delegation-based government services aren’t new. They’ve existed for decades, and are extremely common today.

Tax preparers handle filing returns on behalf of clients. Immigration attorneys navigate visa applications. Customs brokers manage import/export documentation for businesses. Permit expediters guide building approval processes. Benefits navigators help people apply for disability or veterans services.

These are all delegation relationships. Citizens hand over complex, high-stakes government interactions to trusted specialists who handle the administrative burden on their behalf. AI doesn’t enable this service delivery paradigm, but it does potentially make it more scalable and affordable.

Why Words Matter

Thinking about these services as “delegation-based” rather than simply “agentic” opens up useful design questions.

When you frame it as delegation, you can look to existing delegation relationships for guidance. What makes someone comfortable delegating their tax filing to a CPA? What trust factors matter when hiring an immigration attorney? These aren’t abstract questions — there are decades of real-world answers.

The language of delegation also centers the citizen experience more clearly. It’s not about what the AI can do autonomously; it’s about what citizens are willing to hand over and under what conditions. That subtle shift in framing can lead to different design choices around transparency, control and oversight.

Moving Forward

This isn’t a call to abandon the term “agentic services” entirely. It’s widely used in industry, and there’s value in using common language when talking with technology partners and vendors.

But maybe for internal discussions, policy development, and especially citizen-facing communications, it might be worth experimenting with terms like “delegation-based services” or similar language. It acknowledges continuity with existing practices, avoids potentially problematic associations with “government agents,” and keeps the focus on what citizens are actually doing: choosing to delegate burdensome tasks while maintaining appropriate oversight and accountability.

The technology may be new, but the underlying service delivery paradigm isn’t. Our language should reflect that.

Note – this post originally appeared on GovLoop.

#agent #AI #artificialIntelligence #ChatGPT #serviceDelivery







23.10.2025 14:34
technews (@technews@eicker.news)

#OpenAI’s #Atlas, a new #webbrowser with #ChatGPT integration, features #AgentMode, a “preview mode” that can automate web-based tasks. Atlas’ Agent Mode got tested on various tasks, including playing games, creating playlists, scanning emails, editing wikis, building fan sites, and selecting power plans. While Atlas showed promise in automating some tasks, it faced limitations like session length constraints and ethical considerations. arstechnica.com/features/2025/ #tech #media #news






23.10.2025 14:31
remixtures (@remixtures@tldr.nettime.org)

"WIRED sent a public record request to the FTC requesting all complaints mentioning ChatGPT since the tool launched in November 2022. The tool represents more than 50 percent of the market for AI chatbots globally. In response, WIRED received 200 complaints submitted between January 25, 2023, and August 12, 2025, when WIRED filed the request.

Most people had ordinary complaints: They couldn’t figure out how to cancel their ChatGPT subscriptions or were frustrated when the chatbot didn’t produce satisfactory essays or rap lyrics when prompted. But a handful of other people, who varied in age and geographical location in the US, had far more serious allegations of psychological harm. The complaints were all filed between March and August of 2025.

In recent months, there has been a growing number of documented incidents of so-called AI psychosis in which interactions with generative AI chatbots, like ChatGPT or Google Gemini, appear to induce or worsen a user’s delusions or other mental health issues."

wired.com/story/ftc-complaints

#AI #GenerativeAI #MentalHealth #Psychosis #ChatGPT #Delusions






23.10.2025 14:13
onokoto (@onokoto@mastodon.social)

G00D F0R Y0U SΛMMY !!1!~~~~






