socialmedia

24.09.2025 14:23
posts (@posts@killbait.com)

Influencer Rejects Borussia Dortmund's Apology After Mocking Video of Her Stammer

Jessie Yendle, a social media influencer, has rejected an apology from Borussia Dortmund after the German football club posted a TikTok video that mocked her speech impediment. The video showed Yendle struggling with a stammer, and while the clip was intended to be humorous, Yendle expressed feeling...







24.09.2025 14:15
jos1264 (@jos1264@social.skynetcloud.site)

Bluesky seems to be having trouble theverge.com/news/784442/blues #SocialMedia #Tech






24.09.2025 14:00
theverge (@theverge@c.im)

Bluesky seems to be having trouble thever.ge/2heW #SocialMedia #Tech






24.09.2025 13:48
henninguhle (@henninguhle@social.tchncs.de)

I think I'm going to do more on my blog than on any #SocialMedia. And yes, I mean all of them. The #UhleBlog simply matters more to me than any platform.

henning-uhle.eu/informatik/wor






24.09.2025 13:42
dawid (@dawid@vebinet.com)

AI-driven content moderation is not the right move.

In recent months, a growing number of social media platforms owned by big tech firms have announced plans to replace—or dramatically reduce—their human moderation teams in favor of artificial‑intelligence solutions. On the surface, this shift appears logical: AI can process massive volumes of content at a fraction of the cost associated with a staffed moderation workforce, promising faster response times and reduced operational expenses.

However, relying primarily on algorithms for content moderation carries significant risks that outweigh the financial benefits.

Algorithmic bias
Machine‑learning models learn from the data they are fed. If that data reflects existing societal prejudices or the platform’s own historical enforcement patterns, the AI will inevitably replicate those biases. This can lead to disproportionate removal of speech from marginalized groups, uneven enforcement of community standards, and a loss of trust among users.

Lack of contextual approach
Human moderators bring cultural awareness, contextual understanding, and empathy to the decision‑making process—qualities that are extremely difficult for AI to emulate. Sarcasm, satire, regional dialects, and evolving slang often confound automated systems, resulting in false positives (over‑blocking legitimate content) or false negatives (allowing harmful material to slip through).

Accountability and transparency
When a human moderator makes a decision, there is a clear line of responsibility and the possibility of appeal or review. With opaque AI models, it becomes challenging to trace why a particular piece of content was flagged or removed, complicating efforts to provide transparent explanations to affected users.

Ethical issues
Delegating the gatekeeping of public discourse to machines raises profound ethical questions about who controls the flow of information and how power is distributed. Maintaining a human presence in moderation safeguards democratic values by ensuring that nuanced judgment—not solely code—guides content policy enforcement.

Evolving threat landscape
Malicious actors continuously adapt their tactics to evade detection. Human moderators can spot emerging trends, coordinated disinformation campaigns, or novel forms of harassment that AI, trained on past data, might miss. A hybrid approach ensures that new threats are identified and incorporated into future model training.

While AI can certainly augment moderation workflows—by flagging obvious violations, prioritizing high‑risk content for review, and handling repetitive tasks—it should not replace the essential human element. A balanced, hybrid moderation strategy that leverages the speed and scalability of AI while preserving human oversight is crucial to protect users from bias, ensure fairness, and uphold the integrity of online communities.
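The hybrid workflow described above can be sketched in a few lines: the model only auto-acts on near-certain violations, queues ambiguous cases for human review, and leaves the rest alone. This is a minimal illustration, not a real platform's pipeline; the `triage` function, the `violation_score` input, and both thresholds are hypothetical.

```python
# Minimal sketch of a hybrid moderation triage, assuming a hypothetical
# classifier that returns a violation score in [0, 1]. Thresholds are
# illustrative, not taken from any real platform.
from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.95   # near-certain violations: act immediately
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain cases: queue for a person

@dataclass
class Decision:
    action: str   # "remove", "human_review", or "allow"
    reason: str

def triage(violation_score: float) -> Decision:
    """Route content by model confidence instead of auto-deciding everything."""
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return Decision("remove", "high-confidence violation")
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        return Decision("human_review", "ambiguous; needs context a model lacks")
    return Decision("allow", "low risk")

print(triage(0.97).action)  # remove
print(triage(0.70).action)  # human_review
print(triage(0.10).action)  # allow
```

The point of the middle band is exactly the essay's argument: sarcasm, satire, and slang land in the uncertain range, where a human, not the model, should decide.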
#socialmedia #moderation #AI #contentmoderation #bigtech #artificialintelligence






24.09.2025 13:33
ambientdread (@ambientdread@toot.io)

Fish Head Dread

#SocialSplatter #Sundries #Poems #posts #SocialMedia #YesterdaysRiff #writingcommunity #blogging #haiku
#tanka #TankaTuesday

ambientdread.net/2025/09/24/se






24.09.2025 13:29
dustcircle (@dustcircle@mastodon.social)

How Are Getting Ready For The Apparent Return of This Week;
Thanks to a viral video, many on were led to believe that the was taking place -24. But if you’re reading this, it may be too late for you!

theroot.com/jesus-is-apparentl






24.09.2025 13:20
Raptosys (@Raptosys@tooter.social)

One good reason to delete your LinkedIn account, whether France Travail and others like it or not.

clubic.com/actualite-580495-su

#socialmedia #emploi #ai






24.09.2025 13:00
aibay (@aibay@mastodon.social)

🚀 "AI Valley opens new horizons: virtual environments will be the future of our digital world!"

🔗 aibay.it/notizie/ai-valley-pun






24.09.2025 12:47
NieuwsJunkies (@NieuwsJunkies@mastodon.social)

📰 Satirical Disney video by Lubach passes 50 million viewers

nieuwsjunkies.nl/artikel/1iFw

🕧 12:40 | RTL Nieuws






24.09.2025 12:43
lindseygamble_ (@lindseygamble_@flipboard.com)

How to use data to optimize creator briefs, selection, and content to drive results with creator and influencer marketing. #socialmedia #influencermarketing

lindseygamble.beehiiv.com/p/wh

Posted into Creator Economy, Influencer Marketing & Social Media Updates @creator-economy-influencer-marketing-social-media-updates-LindseyGamble_






24.09.2025 12:31
ciferecigo (@ciferecigo@mastodon.social)

n8n has made our social media management more efficient, but the Instagram and TikTok integrations could still be better. More on that ⬇️

ciferecigo.substack.com/p/effi





