
AI optimized for spending

Viki Auslender
July 4, 2025
4 min read
  • TL;DR

    Pelanor is reimagining cloud cost management with AI-native FinOps tools that explain spending, not just track it. By rebuilding the data layer from scratch, we deliver true unit economics across complex multi-tenant environments - revealing what each customer, product, or team actually costs. Our AI vision is deeper: we're building systems that truly reason about infrastructure, learning what's normal for your environment and understanding why costs change, not just when.

AI is often hailed as the great equalizer - a tool that promises to transform any coder into a poet, any designer into a journalist, and any curious amateur into a film producer. Its entry into the job market has been particularly aggressive, even though it has sometimes failed to prove itself an effective (let alone accurate) tool.

Yet its presence (and enormous resource consumption) is undeniable. In cloud computing specifically, this raises a critical question: Do AI-powered cost management tools, especially those offered by hyperscalers, genuinely reduce expenses, or do they simply offer an appealing, automated justification for increased spending under the banner of "smart optimization"?

When cost-saving tools cost more

Cloud infrastructure operates on a pay-per-use model where every API call, data transfer, and status check incurs charges. When AI-powered FinOps tools enter the equation, they bring their own resource consumption. Each analysis and automated adjustment generates additional billable events.

This creates a fundamental conflict of interest. The tools marketed as cost-savers require significant cloud resources to operate, potentially inflating the very bills they promise to reduce.

What's positioned as smart optimization often ends up optimizing the provider's revenue more than your expenses

Take a closer look at the latest wave of AI-powered FinOps tools from the big three cloud providers: AWS's Cost Anomaly Detection and ML-powered forecasting in Cost Explorer, Azure's machine-learning-driven cost predictors, and GCP's increasingly aggressive Recommender engine. They showcase one consistent pattern: tools positioned as enablers of efficiency, wrapped in the language of savings and resource discipline, that simultaneously introduce new avenues for consumption, often invisibly so.

Need "anomaly detection" to spot unusual spending? AWS offers the service itself for free, but running it properly requires extra infrastructure that isn't. The AI tools for autoscaling and forecasting need more compute power, storage space, and monitoring systems, all of which add to your bill. For instance, CloudWatch charges $0.30 per custom metric per month, plus $0.10 for each standard-resolution alarm watching those metrics.
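To see how quickly this supporting telemetry adds up, here is a minimal back-of-the-envelope sketch. The unit prices are assumptions for illustration (modeled loosely on published CloudWatch-style rates, which vary by tier and region); check your provider's current price list before relying on them.

```python
# Assumed unit prices - illustrative only, not an official price list.
PRICE_PER_CUSTOM_METRIC = 0.30   # $/metric/month (assumed)
PRICE_PER_STANDARD_ALARM = 0.10  # $/alarm/month (assumed)
PRICE_PER_1K_API_CALLS = 0.01    # $/1,000 metric-ingestion API calls (assumed)

def monitoring_overhead(custom_metrics: int,
                        alarms: int,
                        api_calls_per_month: int) -> float:
    """Estimated monthly cost of the telemetry layer itself."""
    return (custom_metrics * PRICE_PER_CUSTOM_METRIC
            + alarms * PRICE_PER_STANDARD_ALARM
            + api_calls_per_month / 1_000 * PRICE_PER_1K_API_CALLS)

# A modest setup: 200 custom metrics, 50 alarms, 10M ingestion calls/month.
cost = monitoring_overhead(200, 50, 10_000_000)
print(f"${cost:,.2f}/month")  # -> $165.00/month for the "free" detector's inputs
```

None of these line items appears under "anomaly detection" on the invoice; they surface as generic monitoring charges, which is exactly why the overhead tends to go unnoticed.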

As technology philosopher Langdon Winner once argued, systems are often designed to benefit those who create them more than those who use them. While cloud providers' FinOps tools can deliver real value in specific scenarios, like identifying idle resources or right-sizing instances, they also carry built-in business incentives that can inadvertently push organizations toward increased cloud consumption.

AI does not operate in a vacuum, nor is it a philanthropic endeavor. It ravenously demands infrastructure, incessantly generates telemetry, ceaselessly invokes APIs, and ultimately feeds the cloud provider's insatiable revenue stream. This often happens in ways so opaque, so deeply nested within layers of abstraction, that the very teams relying on it for "cost management" remain unaware of the true consumption drivers.

Automation spending feedback loop

What we're observing across many engineering organizations is a growing, and often misplaced, faith in automated systems' ability to manage resources wisely. Teams eager to modernize and reduce manual work increasingly turn to AI for provisioning resources, managing scaling, and setting budget limits. But in doing so, they often lose sight of what these systems actually prioritize.

The uncomfortable truth is that AI systems typically optimize for the metrics they were trained on: uptime, performance, and reliability. These are important for customer satisfaction and meeting service agreements, but they're not the same as cost efficiency. As technology critic Evgeny Morozov has argued about Silicon Valley's "solutionism," complex problems like cost management get reduced to technical fixes that often serve the platform's interests more than the user's.

When critical resource decisions happen automatically, faster than any human can review, the risks go beyond simple overspending. Organizations can face runaway costs hidden behind dashboards that celebrate "optimization" and "efficiency" while obscuring where the money actually goes in these automated systems.

Your costs, their revenue 

This is where incentives clearly diverge. Cloud providers aren't neutral service layers; they're commercial platforms whose success depends directly on how much you use. Their tools may solve problems, but they often create new forms of dependency at the same time.

While it's undeniably true that their AI tools can solve legitimate operational challenges, it's equally true that they simultaneously create new ones, or at the very least, forge new, tighter dependencies that further entrench their indispensable role within your architectural stack.


AI workloads have become one of the most profitable segments of cloud revenue. GPU clusters, training pipelines, inference endpoints, streaming vector databases - the infrastructure behind generative models isn't just resource-intensive, it's revenue-intensive. According to Goldman Sachs Research, generative AI could drive $200-300 billion in cloud spend. Google Cloud's revenue jumped 35%, its fastest growth in two years, largely thanks to AI. Mordor Intelligence projects the cloud AI market to hit $89.4 billion by 2025, with annual growth exceeding 32%. The line between "AI as optimization" and "AI as consumption vector" has become increasingly blurred.

Consider OpenAI's infrastructure, built atop Azure and GCP. This illustrates the dual role of hyperscalers: they provide genuinely useful optimization tools while simultaneously benefiting from increased AI infrastructure consumption. It's a business model that works - but understanding this dynamic helps organizations make more informed decisions.

Human oversight matters

This isn't to say AI tools are inherently bad; they're powerful when used correctly. The key is understanding their limitations and maintaining active oversight. Smart organizations are finding ways to harness AI's benefits while avoiding its pitfalls.

AI-powered FinOps can deliver tangible savings in cloud spend. But, and this is crucial, those savings do not materialize automatically, as if by magic. They only occur in organizations where dedicated, often hybrid or external, FinOps teams are actively validating, correcting, and intelligently interpreting what the AI recommends. The true value lies not in the algorithm itself, but in the human oversight that keeps it tethered to actual, measurable business goals.

In other words, the savings stem not from intelligence alone, but from intelligent supervision. When that critical human layer of oversight is absent, the very same AI tool that ostensibly "saved" your organization $50,000 one quarter can, with alarming speed, burn through twice that amount the very next.

Reclaiming control

The answer isn’t to vehemently reject AI, nor is it to blindly disable the optimization tools you've already integrated. The imperative is to understand, with absolute clarity and a healthy dose of strategic skepticism, that these tools are only as aligned with your business outcomes as you configure them to be. And that alignment doesn't emerge from a simple checkbox, or a friendly onboarding wizard. It demands unflinching visibility, robust policy, rigorous governance, and, above all, human accountability.

If you're embedding AI-powered features into your own stack (whether through LLM integrations, real-time image processing, or autonomous agents), the imperative is to precisely understand what these features truly cost to run, how they scale under load, and what fundamental architectural trade-offs you're making in exchange for perceived convenience.

Cloud vendors are not your adversaries, but let's be unequivocally clear: they are not your fiduciaries either. Their AI is undeniably smart, their dashboards are persuasively slick, and their value propositions are compelling. But they ultimately win when you consume more, not necessarily when you consume better.

So, the next time an AI engine purrs a recommendation that promises to "optimize," take a deliberate breath. Then, ask the one question that slices through the marketing narrative and gets to the core of the matter: Is this truly a path to less spending, or merely a more sophisticated, more automated way to spend more?