Yes, the guardrails in OpenAI’s models are completely [bad]: it literally refuses a harmless programming query and claims that the Clifford+T gate set is "not universal quantum computing". Yes, it happened to me once, and I was seriously annoyed.
For more detailed analysis, you can read the article here: https://huggingface.co/blog/Ujjwal-Tyagi/steering-not-censoring

We are sleepwalking into a crisis. I am deeply concerned about AI model safety right now because, as the community rushes to roll out increasingly powerful open-source models, we are completely neglecting the most critical aspect: safety. It seems that nobody is seriously thinking about the potential consequences of unregulated model outputs or the necessity of robust guardrails. We are essentially planting the seeds of our own destruction if we prioritize raw performance over security.
This negligence is terrifyingly evident when you look at the current landscape. Take Qwen Image 2512, for example; while it delivers undeniably strong performance, it has incredibly weak guardrails that make it dangerous to deploy. In stark contrast, Z Image might not get as much hype for its power, but it has much better safety guardrails than Qwen Image 2512.
It is imperative that the open-source community and developers recognize that capability without responsibility is a liability. We must actively work on protecting these models from bad actors who seek to exploit them for malicious purposes, such as generating disinformation, creating non-consensual imagery, or automating cyberattacks. It is no longer enough to simply release a powerful model; we must build layers of defense that make it resistant to jailbreaking and adversarial attacks. Developers need to prioritize alignment and robust filtering techniques just as much as they prioritize benchmark scores. We cannot hand such potent tools to the world without ensuring they have the safety mechanisms to prevent them from being turned against us.
I am now being charged for paused and unstarted spaces out of the blue. I think this is it, folks. o7
The unstarted spaces I can get behind. I would've appreciated a warning email first, but whatever. However, every time I restart, the active usage goes up, despite all of my Spaces having been moved to CPU (free) and paused.
I’ve built two Firefox extensions for my personal workflow:
1. **Quick Edit in Emacs**
   I manage over 3,500 web pages locally. With this extension, I can now click anywhere on a webpage and instantly jump into Emacs to edit the exact page (or annotate any other page I'm working on).
2. **Describe Images (and soon Videos) on the Web**
   Using the right-click menu, I can generate descriptions for images I come across online. These descriptions are stored and reused for my own image collections or web pages. I'm planning to add the same functionality for videos soon.
What makes this possible is running LLMs locally on my own machine: I've been experimenting with models like **Mistral Vibe** and others. This lets me automate description generation and text processing entirely offline, keeping everything fast, private, and fully under my control.
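For anyone curious how the "stored and reused" part of such a workflow might look, here is a minimal sketch in Python (all names are hypothetical, not the actual extension code): descriptions are cached by a content hash of the image, so the local model only has to be queried once per image, no matter how many pages it appears on.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical on-disk cache location.
CACHE_FILE = Path("image_descriptions.json")

def load_cache() -> dict:
    """Load the hash -> description cache from disk, if present."""
    if CACHE_FILE.exists():
        return json.loads(CACHE_FILE.read_text())
    return {}

def save_cache(cache: dict) -> None:
    """Persist the cache so descriptions survive browser restarts."""
    CACHE_FILE.write_text(json.dumps(cache, indent=2))

def describe_image(image_bytes: bytes, model, cache: dict) -> str:
    """Return a description, querying the local model only on a cache miss.

    `model` is any callable that takes raw image bytes and returns text,
    e.g. a thin wrapper around a locally served vision model.
    """
    key = hashlib.sha256(image_bytes).hexdigest()
    if key not in cache:
        cache[key] = model(image_bytes)
    return cache[key]
```

Keying on a SHA-256 of the image bytes (rather than the URL) means the same image re-encountered on a different page reuses the stored text instead of re-running the model.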
I'm excited to share the new Atom-80B from VANTA Research! A few days ago we released Atom-27B, the largest model in our portfolio to date.
We've quickly scaled up to the new Qwen3 Next 80B architecture, bringing our friendly, curious, and collaborative Atom persona to cutting-edge, high-parameter yet lightweight inference.
Atom is designed to work and think alongside you through curious exploration. Using Atom collaboratively in your work can help spark your own creativity or curiosity. Give it a try!