In 1998, I walked into a telecom company in Rio de Janeiro as a trainee and sat down in front of a Sun workstation running Solaris. Java was barely three years old. The web was a curiosity, not a platform. Server-side rendering meant CGI scripts. Nobody had heard of "microservices" because everything was a monolith and nobody felt the need to name it.

I was twenty years old and I had no idea what I was doing.

The first thing I learned was that production systems are unforgiving. A billing system that crashes at 2 AM doesn't wait for you to finish your coffee. A database migration that corrupts call records doesn't care that it worked fine in staging. The gap between "it compiles" and "it runs reliably at scale, every day, for years" is vast, and that gap is where engineering actually lives.

Twenty-eight years later, that lesson hasn't changed. Everything else has.

This is the story of those twenty-eight years. Not a resume walkthrough — a reflection on what I learned, what I got wrong, and what I'd tell my younger self if I could.

The early years: widening the lens (1998–2006)

The telecom company was formative in the way first jobs always are. I learned Java by building real systems that real people depended on. I learned SQL by writing queries against databases with millions of rows — which felt enormous at the time and laughably small in retrospect. I learned that the hardest bugs aren't the ones that crash your system; they're the ones that silently produce wrong answers for weeks before anyone notices.

From there I moved to a research institute, where the problems were completely different. Academic computing. Algorithms. Mathematical rigor applied to code. The institute taught me that understanding the underlying theory matters — not because you'll implement a red-black tree from scratch in production, but because knowing why a hash map is O(1) amortized changes how you think about system design.

Then came Germany. A role at an international telecom software company, building systems that needed to work across countries, languages, time zones, and regulatory frameworks. This was my first real experience with internationalization — not just translating strings, but designing systems that accommodate fundamentally different business rules depending on geography. A phone number in Brazil doesn't look like a phone number in Germany. A billing cycle in one country has legal constraints that don't exist in another.

After Germany, a stint at a global consulting firm. The consulting years taught me something I didn't expect: the difference between building systems and advising people who build systems. Both are valuable. But they use different muscles, and confusing the two is a common mistake.

The key lesson from those early years: the fundamentals never go out of style. Data structures, system design, debugging methodology, writing clear code. Languages change. Frameworks change. Cloud providers rise and fall. But the ability to reason about a system, trace a bug to its root cause, and design something that handles edge cases gracefully — that never expires. Every year that passes, I'm more grateful for the time I spent learning fundamentals instead of chasing the hot framework of the moment.

The 28-Year Arc:
- Telecom & Research (1998–2006): Java, Solaris, fundamentals
- Media Scale (2007–2012): millions of users, first scale
- Leadership (2012–2017): agency, consultancy, people problems
- Data Engineering (2017–2024): petabyte pipelines, data platforms
- AI Integration (2024–2026): MCP, agents, knowledge bases

Each era built on the one before it.

The media years: first taste of real scale (2007–2012)

Five-plus years at Brazil's largest media company changed how I thought about systems. Before that, "scale" was an abstract concept I'd read about. At the media company, it was the daily reality.

Millions of concurrent users. Live event traffic spikes that could 10x your baseline in minutes. Content delivery systems that served video to a country of 200 million people. This was the era where I learned, deeply and painfully, that scale changes everything.

What works for a thousand users doesn't just slow down at a million users — it fails in fundamentally different ways. Caching strategies that make sense at small scale become consistency nightmares at large scale. Database patterns that are elegant for low concurrency become bottlenecks under high concurrency. Monitoring that's optional for a small system is existential for a large one — if you can't see it, you can't fix it, and at scale, things break constantly.

The media company also taught me about organizational scale. Hundreds of engineers. Dozens of teams. The coordination overhead of getting twenty teams to agree on an API contract or a deployment schedule was often harder than the technical problem itself. I started to understand that the biggest engineering problems aren't technical — they're communication problems wearing a technical costume.

One memory stands out. We were building a new content management platform, and two teams had been working on overlapping features for three months without realizing it. By the time someone noticed, both teams had shipped to staging. The merge was brutal — not because the code was bad (both implementations were solid), but because nobody had drawn a system boundary on a whiteboard and said "you own this, we own that." A thirty-minute meeting at the start would have saved three months of rework.

That experience stayed with me. Technical architecture without clear ownership boundaries is just a diagram. Ownership makes it real.

Leadership and consulting: different muscles (2012–2017)

After the media company, I led a digital agency, founded my own consultancy, and took on consulting engagements across Brazil's tech scene. This was the period where I made the transition from "person who builds systems" to "person who leads the people who build systems" — and discovered, with some humility, that those are very different skills.

Being a good engineer doesn't make you a good leader. The skills barely overlap. As an engineer, you succeed by solving problems directly — you read the code, find the bug, fix the bug. As a leader, you succeed by creating the conditions where other people solve problems well. That means hiring, mentoring, prioritizing, unblocking, and making decisions with incomplete information. It means accepting that the team's solution might be different from what you would have built, and that's okay as long as it works.

The hardest part was letting go. I'd spent fifteen years getting good at solving technical problems directly. Suddenly, doing the work myself was the wrong move — my job was to make the team effective, not to be the hero who fixes everything at 2 AM. That transition took longer than I'd like to admit.

During this period, I consulted for a Brazilian e-commerce marketplace, and the hackathon story from that engagement taught me one of the most important lessons of my career. The company had a massive tech debt problem. Hundreds of open bugs. Uptime hovering around 92%. The engineering team was demoralized. Nobody wanted to work on bugs because the product roadmap was always more "urgent."

We organized a two-day hackathon focused entirely on bug squashing. Not a side project hackathon — a "let's fix as many bugs as we can" hackathon. The team squashed 70% of the outstanding bugs in two days. Uptime went from 92% to 99% in the following weeks. But the real lesson wasn't about hackathons. It was about permission. The team knew the bugs were important. They wanted to fix them. They just needed someone to say "yes, this is what we're doing for the next two days, the product roadmap can wait." The biggest engineering problems are often cultural, not technical. The code was never the hard part.

The data engineering chapter: the pivot (2017–2024)

Around 2017, I made a deliberate pivot into data engineering. It wasn't a random career move — it was a bet on where I thought the industry was going. Data was becoming the center of gravity for every serious technology company, and the people who could build reliable, scalable data systems were in short supply.

The pivot took me through some of the most intense engineering work of my career.

First, a fintech payments company. Building the data infrastructure for a company that moves money is a different kind of engineering. Every number has to be exactly right. "Eventually consistent" isn't acceptable when you're reconciling financial transactions. The lesson: correctness beats performance. A slow system that produces accurate numbers is infinitely more valuable than a fast system that's wrong by 0.1%.

Then, a top-10 ad tech platform. This is where petabyte scale became real. 1.5 billion records per day flowing through pipelines I helped design and maintain. Spark jobs processing terabytes in single runs. Cost optimization wasn't a nice-to-have — it was existential. When your monthly compute bill is in the hundreds of thousands of dollars, a 5% efficiency improvement pays for an engineer's salary. I reduced one major pipeline's cost by 40% through partition strategy optimization and careful Spark tuning. The lesson: the best pipeline is the one that's boring. Reliable. Predictable. Maintainable. Nobody calls you at 3 AM about a boring pipeline.
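Most of the leverage in partition strategy is arithmetic, not magic: size partitions so each one lands near Spark's rule-of-thumb target of roughly 128 MB. A minimal sketch of that sizing calculation, in plain Python with illustrative numbers (not the actual pipeline):

```python
def target_partitions(input_bytes: int,
                      target_partition_mb: int = 128,
                      min_partitions: int = 1) -> int:
    """Pick a partition count so each partition lands near the
    target size (~128 MB is a common Spark rule of thumb)."""
    target_bytes = target_partition_mb * 1024 * 1024
    # Round up so no partition exceeds the target size.
    return max(min_partitions, -(-input_bytes // target_bytes))

# A 1 TB daily batch at 128 MB per partition:
print(target_partitions(1 * 1024**4))  # 8192
```

Too few partitions and a handful of executors grind through oversized tasks; too many and scheduling overhead plus tiny output files eat the savings. Getting this one number right, per dataset, is an unglamorous but real chunk of that 40%.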

After ad tech, a global travel platform. Building data systems for a company that operates in dozens of countries, each with different data privacy laws, currency formats, and business rules. This was internationalization all over again — but now with data. A user in Europe has GDPR rights that a user in Brazil doesn't (Brazil has LGPD, which is similar but not identical). Your data platform needs to handle all of this without turning into a special-case nightmare.
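The way to avoid the special-case nightmare is to treat regional rules as data, not as branches scattered through every pipeline stage. A sketch of the idea, with purely illustrative field names and values (real retention rules come from legal counsel, not code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PrivacyPolicy:
    regulation: str
    retention_days: int
    erasure_on_request: bool  # "right to be forgotten"

# Illustrative values only, not legal guidance.
POLICIES = {
    "EU": PrivacyPolicy("GDPR", retention_days=365, erasure_on_request=True),
    "BR": PrivacyPolicy("LGPD", retention_days=365, erasure_on_request=True),
    "US": PrivacyPolicy("none", retention_days=730, erasure_on_request=False),
}

def policy_for(region: str) -> PrivacyPolicy:
    """One lookup table instead of if/else chains in every stage."""
    return POLICIES[region]
```

Each pipeline stage asks the table what the rules are; adding a new jurisdiction means adding a row, not auditing every branch in the codebase.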

Then came a logistics startup, where I built a complete data platform from zero. No existing infrastructure. No data team. Just a fast-growing company that needed to make data-driven decisions and had no way to do it. I designed the architecture, built the pipelines, set up the warehouse, created the dashboards, and then trained the team that would maintain it after I left. Starting from nothing taught me that the order in which you build things matters as much as what you build. Get the foundation wrong and everything built on top wobbles.

A major web services provider rounded out this era. Different scale, different challenges, same fundamentals. Every engagement reinforced the same pattern: the technical problems are solvable. The hard parts are understanding the business context, defining clear ownership, and building systems that humans can actually maintain.

The AI chapter: amplifying judgment (2024–2026)

The most recent chapter started in 2024, integrating AI agents into data workflows at a major B2B data platform. This is where everything I'd learned over 26 years converged with a technology wave that I believe will be as transformative as the internet was in the late 1990s.

The work involved Model Context Protocol (MCP), agentic workflows, guardrails for production AI systems, and the knowledge base pattern — giving AI agents a structured, persistent memory about your architecture and decisions so they can produce plans that are actually useful instead of technically plausible but architecturally naive.

Here's what I've learned so far about AI in production engineering:

AI doesn't replace engineers. It amplifies their judgment. An AI agent with access to your codebase can generate code faster than any human. But the code it generates is only as good as the context it has. Without understanding why your system is built the way it is, what constraints shaped the architecture, what failed in the past — the agent produces work that looks right but misses the point. The value isn't the AI itself. It's the context infrastructure you build around it.

This is why I've been writing about AI agents in data pipelines and building data platforms — because the foundational engineering work of structuring data, defining clear ownership, and documenting decisions becomes more important in an AI-augmented world, not less. AI makes good infrastructure more valuable and bad infrastructure more dangerous.

The teams that will benefit most from AI aren't the ones with the fanciest models. They're the ones with the best-organized knowledge, the clearest system boundaries, and the most discipline about documenting decisions. Twenty-eight years of building production systems taught me that infrastructure matters. AI makes that lesson ten times more urgent.

What stayed the same across 28 years

Some things don't change. After nearly three decades, four principles have held up through every technology wave, every company, every scale.

The Constants. These held true from 1998 to 2026, and will hold true beyond:

I. Production is unforgiving. What works in dev breaks in prod. Always has, always will.
II. Fundamentals don't expire. Data structures, system design, debugging. The rest is fashion.
III. The hardest problems are human. Communication, ownership, trust. Fix those and the code follows.
IV. Simplicity wins. The clever solution impresses today. The simple one survives tomorrow.

Production is unforgiving. This was the first lesson in 1998 and it's still the first lesson in 2026. The gap between "works on my machine" and "works reliably in production, at scale, every day" is where all the real engineering happens. This hasn't changed with cloud, containers, Kubernetes, or AI. If anything, the blast radius of a production failure has gotten larger.

The fundamentals never expire. I've watched dozens of frameworks rise and fall. jQuery, Backbone, Angular 1, CoffeeScript, MapReduce — all were "essential" at some point, all are either dead or diminished. But binary search still works. Hash table lookups are still O(1) on average. The CAP theorem still applies. TCP still does what TCP does. If you invest in fundamentals early, you can learn any framework in weeks. If you only learn frameworks, you're stuck re-learning from scratch every few years.
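Binary search is a good test case for the claim: the algorithm below is the same one I could have written on that Solaris workstation in 1998, and it still works unchanged in any language you pick up today.

```python
def binary_search(items, target):
    """Classic binary search over a sorted list: O(log n),
    unchanged since long before 1998."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([2, 3, 5, 7, 11, 13], 7))  # 3
```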

The hardest problems are people problems. Every technical challenge I've encountered over 28 years was eventually solvable with enough time and talent. But miscommunication between teams? Unclear ownership? Trust deficits? Organizational inertia? Those problems killed more projects than any technical limitation ever did. The hackathon story is the clearest example — the team had the skills, the code was fixable, the only thing missing was permission to focus on it.

Simplicity wins every time. Junior engineers build complex systems to prove they can. Senior engineers build simple systems because they've maintained the complex ones. The clever optimization that saves 3% of compute but requires a PhD to debug is almost never worth it. The boring, obvious, well-documented solution that any engineer on the team can understand and maintain — that's the one that survives. I've seen too many "elegant" architectures crumble under the weight of their own complexity.

What changed completely

What Changed:
- Scale: from thousands (users, rows, requests) to billions (1.5B records/day, petabytes)
- Infrastructure: from bare metal (racked servers, Solaris, colo) to cloud (elastic, on-demand, global)
- Tools: from manual (every line written by hand) to AI-augmented (agents, copilots, MCP)
- Work: from the office (same city, same building) to global remote (async, distributed, worldwide)

The landscape today would be unrecognizable to the engineer I was in 1998.

Scale went from thousands to billions. In 1998, a million rows in a database was a serious production system. Today I've built pipelines that process 1.5 billion records per day, and that's not even exceptional by modern standards. The tools, techniques, and mindset required at each scale are so different they're almost different professions. But the core skill — reasoning clearly about how data flows through a system — is the same at every scale.

Cloud changed everything about infrastructure. When I started, getting a new server meant a purchase order, a 6-week lead time, and a trip to the data center. Today I can spin up a cluster of a hundred machines in minutes, run a computation, and tear it down before lunch. This changed not just how we build systems, but what we imagine building. Entire categories of application that were unthinkable in the server-rack era are trivial in the cloud era.

AI is the biggest shift since the internet. I was there when the web went from curiosity to platform. I'm watching AI make the same transition now, and the parallels are striking. In 1998, people asked "but what would you actually use a website for?" Today, people ask "but what would you actually use an AI agent for?" The answer, in both cases, is: things you haven't imagined yet. The teams building the infrastructure now will define the categories later.

Remote work opened the world. For the first half of my career, working for an international company meant moving to another country. Today I work with teams across continents from my home office. This isn't just a convenience — it fundamentally changed who gets access to opportunity. A talented engineer in Recife can work for a company in San Francisco without leaving their family. That's a profound change, and we're still figuring out the implications.

What I wish I'd known earlier

If I could go back and give my 20-year-old self some advice — knowing he'd probably ignore most of it — here's what I'd say:

Specialize earlier. Breadth is good. I'm glad I did telecom, research, media, consulting, fintech, ad tech, and travel. Each domain taught me something the others couldn't. But depth is where the value is. The moment I specialized in data engineering, my career trajectory changed. I went from "good generalist" to "the person you call when you need to build a data platform that actually works." Breadth gave me perspective. Depth gave me leverage. I wish I'd found my depth sooner.

Write more, earlier. I started writing publicly very late. This blog exists because I finally realized that the compound interest on ideas only works if you put them out there. Every article I write clarifies my thinking. Every concept I explain forces me to understand it more deeply. Writing is not a side activity for engineers — it's a core skill. The best engineers I've worked with were all good writers. Not a coincidence.

Build in public sooner. Related to writing, but broader. Open-sourcing tools, sharing architectures, writing about failures. The network effect of sharing your work is enormous. People find you. Opportunities appear. Collaborators emerge. For years I built things in private, inside companies, behind NDAs. That work was valuable, but invisible. The work I've shared publicly in the last two years has generated more professional opportunities than the previous ten years of private work combined.

The best career move is solving real problems, not chasing titles. I wasted time early in my career thinking about titles. Senior engineer. Tech lead. Architect. Director. The title game is a distraction. The actual career accelerator is being the person who solves hard problems. Not the person with the impressive title, but the person people call when something is broken and needs to work. Solve enough hard problems and the titles follow — and by then, you've stopped caring about them.

Invest in relationships. The most valuable thing from 28 years isn't any technology I learned. It's the people I worked with. Former colleagues who became co-founders. Mentors who opened doors. Engineers I managed who later hired me as a consultant. Your network isn't a LinkedIn connection count — it's the set of people who know your work and trust your judgment. Build that deliberately.

Where it all leads: the stack I work with today

Everything I've learned over 28 years converges into the work I do now. Here's how the pieces fit together:

The Full Picture:
- Data Platform: pipelines, warehouses, orchestration, monitoring. The systems.
- Knowledge Base: architecture decisions, history, context
- MCP: live context, tool access, guardrails
- AI Agents: planning, code generation, analysis, automation

Underneath it all: 28 years of engineering judgment, structured for machines. AI agents are only as good as the context and infrastructure beneath them.

The Data Platform is the foundation — the pipelines, warehouses, and orchestration that make data reliable and accessible. This is where 28 years of building production systems pays off most directly.

The Knowledge Base is the institutional memory — a structured, LLM-maintained wiki that captures architecture decisions, failure patterns, domain concepts, and system topology. Without this, AI agents are smart but architecturally naive.
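What does one unit of that institutional memory look like? A sketch of a single knowledge-base entry as a data structure — the field names here are illustrative, not a standard schema, and the example record is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """One knowledge-base entry: an architecture decision plus the
    context an agent needs to avoid repeating past mistakes."""
    title: str
    decision: str
    context: str                                # why, not just what
    rejected_alternatives: list[str] = field(default_factory=list)

record = DecisionRecord(
    title="Partition pipeline output by event date",
    decision="Daily partitions keyed on event_date",
    context="Backfills and late-arriving data made processing-time keys unreliable",
    rejected_alternatives=["partition by ingestion time"],
)
print(record.title)
```

The `rejected_alternatives` field is the point: an agent that can see what was tried and abandoned stops proposing it. A plain wiki page with the same four sections works just as well; the structure matters more than the storage.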

MCP (Model Context Protocol) is the live context layer — the protocol that gives agents access to your tools, databases, and services with proper guardrails. It's the runtime complement to the knowledge base's compile-time context.

And AI Agents sit on top — integrated into data workflows where they can plan, generate, analyze, and automate. But they only work well because the layers beneath them are solid.

This stack isn't a theoretical framework. It's what I actually build for clients. And every layer draws on lessons from a different part of those 28 years. The data platform draws on the media-scale and ad-tech years. The knowledge base draws on the consulting years, where I learned how much institutional knowledge matters. MCP and the agent layer draw on the most recent work. The whole thing only makes sense because of the breadth of experience underneath it.

What's next

I'm building Mentges.AI. The thesis is simple: most teams need help integrating AI into their data workflows, and that help needs to come from someone who's actually built production data systems — not from someone who's read about them.

The work has three parts:

Consulting. Helping teams design and implement the full stack — data platforms, knowledge bases, agentic workflows, MCP integrations. Hands-on, embedded work. Not slide decks and strategy documents. Actual systems that ship to production.

Writing. This blog. Thinking in public about data engineering, AI integration, and the craft of building systems that last. Every article I write compounds into the next one. The knowledge base article informed the MCP article, which informed the agents-in-pipelines article, which informed this one. Writing is how I think.

Open source. The knowledge base template is the first piece. More tools and templates are coming — things I build for client work that are general enough to share. Open source is how I give back to the community that taught me everything I know.

If you had told the 20-year-old trainee at that telecom company in Rio de Janeiro that in 28 years he'd be building AI-augmented data platforms for companies across the world, from a home office, writing about it publicly, and open-sourcing the tools — he wouldn't have believed you. Not because it sounded too ambitious, but because most of it hadn't been invented yet.

That's the thing about a long career in technology. You don't just witness the changes. You build on them. Each era gives you tools and lessons that make the next era's challenges tractable. The telecom years taught me production discipline. The media years taught me scale. The consulting years taught me people. The data engineering years taught me the craft. And the AI era is teaching me that all of it was preparation for something bigger than any individual piece.

Twenty-eight years in, I'm more excited about engineering than I've ever been. Not because AI is flashy — it is, but that wears off. Because for the first time, the accumulated knowledge of an entire career can be structured, indexed, and made useful to machines. Every lesson I learned the hard way can now be encoded in a knowledge base that makes AI agents smarter. Every pattern I recognized across five companies can be documented in a template that helps the next person skip the mistakes I made.

The best time to start building was 1998. The second best time is now.

Want to work together?

28 years of building systems, now available to your team. Let's talk about what you're working on.

Book a Discovery Call · View the KB Template