CONTENT MACHINE

trend → content → post

AI / Machine Learning / US / Apr 9

agent + claude + your

Top Tweets

@cyrilXBT

INSTEAD OF WATCHING NETFLIX TONIGHT. Spend 1 hour building this. Obsidian + Claude Code = your own personal JARVIS. A second brain that captures everything, connects every idea, and thinks alongside you using the most powerful AI model available. Takes 1 hour to set up. Works while you sleep. The people who build this tonight will never work the same way again. The people who skip it will still be taking scattered notes and losing their best ideas next year wondering why they cannot think clearly. Your call.

5170 likes · 536 RTs

@AIatMeta

Introducing Muse Spark, the first in the Muse family of models developed by Meta Superintelligence Labs. Muse Spark is a natively multimodal reasoning model with support for tool-use, visual chain of thought, and multi-agent orchestration. Muse Spark is available today at https://t.co/wHkMPH82ZH and the Meta AI app. We’re also making it available in private preview via API to select partners, and we hope to open-source future versions of the model. Learn more: https://t.co/PloE9q5x96

7765 likes · 902 RTs

@milesdeutscher

Claude Code + Obsidian is the most powerful AI combo I've ever used. I literally built an AI second brain that runs my entire life. Inspired by Andrej Karpathy's LLM Knowledge Wiki, this tool has been a complete game-changer. Here's EXACTLY how to build one for yourself: https://t.co/3Y3NeCyWlE

1688 likes · 182 RTs

@RoundtableSpace

CLAUDE OPUS 4.6 THINKING REDUCED BY 67% - Data shows Claude Opus 4.6 now thinks 67% less than before, dubbed “AI shrinkflation” - Same price but noticeably dumber; users report more guardrails and restricted output - Anthropic stayed silent until public data dropped; suspected compute-saving for next model (Mythos)

3418 likes · 337 RTs

Hook 1: Contrarian / Hot Take

Everyone's Building a Second Brain. Nobody's Actually Using It.

Here's what nobody wants to hear: most of the elaborate AI agent setups you're seeing online, the Claude Code + Obsidian workflows, the personal JARVIS builds, the sprawling second-brain architectures, are sophisticated procrastination engines dressed up as productivity systems.

I know. I built one too. Last year I spent three weeks configuring an Obsidian vault with custom templates, linking algorithms, and a Claude-powered query system. It was beautiful. It was comprehensive. I used it for about four days before reverting to a Google Doc and my Notes app.

The uncomfortable truth is that the gap between "having an AI second brain" and "thinking better" is enormous. And most people building these systems are doing it for the dopamine hit of building, not the clarity of thinking. Meanwhile, something more concerning is happening quietly in the background.

AI Shrinkflation Is Real

If you've been using Claude Opus 4.6 and feeling like it's gotten noticeably weaker at reasoning through complex problems, you're not imagining things. Reports are surfacing that the model now uses 67% fewer thinking tokens than earlier versions. Same API price. Noticeably narrower reasoning chains.

This is what economists call shrinkflation: quietly reducing product quality while holding the price steady. In the food industry, it means smaller cereal boxes. In AI, it means models that sandbag their reasoning to cut compute costs.

The irony is stunning. We're out here building elaborate agent systems to extract more value from AI tools while the underlying models are being quietly dialed back. It's like running on a treadmill that keeps moving backward.

The Tool-Use Paradox

Meta just released Muse Spark with native multimodal reasoning and tool-use support. The demos look impressive. But here's what the breathless announcement doesn't tell you: tool-use capability has been available in open-source models for over a year. What's actually new is the wrapper and the marketing.

The real pattern underneath all this AI news is simpler than the hype suggests. Companies are racing to add features that make demos look good. They're simultaneously optimizing for margins by reducing model capability between releases. And the community is celebrating both developments with equal enthusiasm.

This creates a strange dynamic where the users most excited about AI agents, the ones building elaborate systems, pushing model limits, and exploring edge cases, are also the ones most likely to notice when their tools quietly degrade.

The Minimal Viable Intelligence

Here's what actually works right now: use Claude Code or Cursor directly for the task at hand. Don't build an infrastructure layer around it. Don't create a second brain. Don't architect a personal JARVIS. Just use the AI when you need it.

I know this feels anticlimactic. It lacks the satisfying feeling of having built something. But most of the elaborate agent setups I've seen friends build share a common failure mode: they spend more time maintaining the system than the system saves them.

The exception is people with genuinely complex information workflows: researchers, writers with massive archives, founders managing multiple projects at scale. For them, the Obsidian + Claude setup genuinely pays off. For everyone else? You're optimizing for the feeling of control, not the reality of output.

What I Actually Recommend

If you're currently building a second brain, ask yourself one question: what problem is this solving that a simple note-taking app and a good Claude conversation couldn't solve in five minutes? If the answer is "nothing yet," stop building. Use the AI directly. Ship the work. Save your project files in a folder called "stuff."

The most powerful AI workflow isn't the most elaborate one. It's the one you actually use consistently. And consistency comes from simplicity, not from building architecture.

The models might be getting slightly worse. The tools are getting slightly more complex. The gap between those two trends is where your productivity goes to die. Pick a boring, simple workflow. Use it every day. Let the people building elaborate JARVIS systems post their screenshots. Your empty task list will thank you.
Hook 2: Question / Curiosity

The Personal AI Agent Revolution Is Quietly Happening And Most People Are Missing It

What if you could build a version of yourself that never forgets, never misses a connection, and thinks alongside you constantly? That's not a sci-fi pitch. It's what's happening right now with Claude Code and Obsidian, and the people building these systems are seeing results that border on the uncomfortable.

I keep seeing the same pattern emerge. Someone spends an hour setting up Claude Code connected to Obsidian, and within days they've created something that acts less like a tool and more like a thinking partner. It captures stray ideas. It connects concepts across months of notes. It surfaces insights you would have missed because you weren't looking for them.

The comparison to JARVIS isn't accidental. Tony Stark didn't have better hardware than other people. He had a system that understood context, remembered everything, and could reason across domains. That's exactly what these AI second brains are becoming.

The interesting part is how this is spreading. It's not coming from tech companies or productivity influencers. It's coming from people who had a problem and discovered that AI agents are finally capable enough to solve it in a way that actually sticks.

But here's where it gets complicated. Meta just released Muse Spark, a multimodal reasoning model with tool-use capabilities. The timing isn't coincidental. We're entering a phase where the infrastructure for building these personal AI systems is rapidly expanding. The question isn't whether this becomes mainstream. It's whether the people building these systems understand what they're creating.

A second brain that thinks alongside you sounds innocuous until you realize what that means in practice. You're training a system on your reasoning patterns, your priorities, your blind spots. The value is in the personalization. The risk is in the dependency.

And then there's the other conversation happening in parallel: reports that Claude Opus 4.6 shows a 67% reduction in thinking tokens. The phrase floating around is "AI shrinkflation": paying the same price for less capability, wrapped in more guardrails and safety measures that make the model visibly more constrained.

This tension is real and worth examining. If you're building a personal agent system that relies on a model's reasoning depth, and that model quietly becomes shallower, your system degrades without you noticing until it's too late. The integration feels solid. The foundation is eroding.

This is why the people actually building these systems are paying attention to more than just benchmarks. They're watching how models handle edge cases, whether responses still show genuine reasoning or just pattern-matching to safety defaults, and how often the system needs correction.

The shift happening isn't just about productivity. It's about delegation. When you build a second brain that runs your life, you're making a decision about what you trust machines to handle independently. That decision requires knowing what you're actually delegating to.

The people getting this right aren't treating AI agents as magic. They're treating them as systems with real constraints that require active monitoring. The combination of Claude Code and Obsidian works because it creates feedback loops: the AI surfaces information, you correct it, it learns, it improves. The system gets better because you're still in the loop.

That's the part most articles about AI agents skip. The value isn't in automation. It's in amplification. You remain the decision-maker. The agent makes you faster, more comprehensive, more consistent. But the judgment still lives with you.

The technology will continue advancing: more models, more capabilities, more integrations. The organizations and individuals who extract the most value won't be those who adopt fastest. They'll be those who understand exactly what they're building and why, and who maintain clarity about the difference between what the system can do and what they should delegate.

The personal AI agent revolution is happening. Whether you participate as a thoughtful builder or a passive consumer will determine everything about the outcomes.
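One way to make "watching for quiet degradation" concrete is to log a simple per-response proxy metric, such as a reasoning-token count or response length, and flag sharp drops against a rolling baseline. This is a minimal sketch, not a calibrated test: the 30% drop threshold, the window size, and the choice of metric are all illustrative assumptions.

```python
from collections import deque


class DegradationMonitor:
    """Track a per-response quality proxy (e.g. a reasoning-token
    count) and flag sharp drops against a rolling baseline."""

    def __init__(self, window: int = 20, drop_threshold: float = 0.30):
        self.history = deque(maxlen=window)  # recent metric values
        self.drop_threshold = drop_threshold

    def record(self, metric: float) -> bool:
        """Log one observation; return True if it sits more than
        drop_threshold below the rolling average of recent values."""
        if len(self.history) >= 5:  # need a few samples for a baseline
            baseline = sum(self.history) / len(self.history)
            degraded = metric < baseline * (1 - self.drop_threshold)
        else:
            degraded = False
        self.history.append(metric)
        return degraded
```

Feed it whatever signal your tooling exposes after each session; the point is less the specific statistic than having any trend line at all, so a quiet regression shows up as data instead of a vague feeling.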
Hook 4: Story / Anecdote

I Built a Second Brain That Actually Thinks

Three weeks ago, I made a decision that changed how I work entirely. Instead of opening Netflix after dinner, I spent an hour connecting Obsidian to Claude Code. By 11 PM, I had something I can't stop thinking about: my own personal JARVIS. A thinking partner that remembers everything I forget, connects ideas I didn't know were related, and thinks alongside me when I get stuck.

Here's the thing nobody tells you about building an AI second brain: it's not about the tools. It's about the workflow you create between them. The magic happens in the loop. You capture thoughts in Obsidian throughout the day. Before bed, you open Claude Code, point it at your vault, and say something like "What am I missing here? What patterns do you see in my notes from this week?" It reads everything, connects the dots you missed, and surfaces insights you didn't know you were sitting on.

This isn't new thinking. Andrej Karpathy wrote about LLM Knowledge Wikis last year. The concept is simple: use language models to navigate and reason over everything you've written. The execution is what makes it powerful. Obsidian gives you local, markdown-based storage; your thoughts live in plain text files you actually own. Claude Code gives you an agent that can read, write, search, and reason across that entire corpus. Together, they become something neither can be alone.

I watch people get excited about AI capabilities in the abstract. "AI this, AI that." But when you have a system that knows your vocabulary, your projects, your half-finished thoughts from 2 AM three months ago, something shifts. The AI stops being a chatbot and starts being an extension of your own cognition.

The workflow I landed on took some experimentation. Every morning, I dump raw notes into a daily note file. Thoughts, observations, links, fragments. No structure, no judgment. Then during my weekly review, I ask Claude to help me connect new material to existing notes: "Here's what I wrote today. How does it relate to what I was exploring in October?" The responses are genuinely useful. Not generic advice, but specific connections based on my actual thinking. It remembers what I was obsessed with six months ago and can spot when I'm circling back to the same idea with slightly different language.

Yes, there's been chatter about reduced capabilities in newer models. "AI shrinkflation," some call it: companies offering less thinking for the same price. That's real and worth tracking. But here's what matters: the fundamental architecture of using models as reasoning partners over your own knowledge still works. If anything, it matters more when the models themselves are getting noisier.

The people I see getting real value from AI tools aren't chasing the latest model releases. They're building systems. They're creating workflows that survive the inevitable improvements and regressions in underlying models. Obsidian plus Claude Code is one such system. It will evolve, but the principle remains: your second brain should be yours, not trapped on someone else's server.

I spent years trying productivity systems. Zettelkasten, PARA and its variants, GTD. They're all trying to solve the same problem: how do you externalize thinking without losing what makes it valuable? AI agents don't solve this problem perfectly, but they get closer than any index card system ever could.

The hour you spend setting this up isn't about saving time. It's about reclaiming cognitive capacity. Every idea you capture without immediately filing it. Every connection you notice across weeks instead of months. Every time you remember something existed because your second brain reminded you. That's worth more than Netflix. Start tonight.
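The daily-dump-plus-weekly-review loop described above can be sketched in a few lines. This is a minimal sketch under stated assumptions: a local Obsidian vault of plain markdown files, with "this week's notes" approximated as any `.md` file modified in the last seven days. It only assembles the review prompt; you would then paste it into (or pipe it to) Claude Code yourself.

```python
import time
from pathlib import Path


def build_weekly_review_prompt(vault: Path, days: int = 7) -> str:
    """Gather vault notes touched in the last `days` days and wrap
    them in a review prompt for an AI reasoning partner."""
    cutoff = time.time() - days * 86400
    recent = sorted(
        p for p in vault.rglob("*.md")
        if p.stat().st_mtime >= cutoff
    )
    # Label each note with its vault-relative path so the model can
    # cite which file a connection came from.
    sections = [
        f"## {note.relative_to(vault)}\n{note.read_text(encoding='utf-8')}"
        for note in recent
    ]
    return (
        "Here are my notes from the past week. What patterns do you "
        "see, and how do they relate to ideas I was exploring before?\n\n"
        + "\n\n".join(sections)
    )
```

Modification time is a crude filter (a typo fix counts as "touched"), but it keeps the sketch dependency-free; a real setup might instead read Obsidian daily-note filenames, which encode the date directly.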