I spent three months debugging why our Lambda functions were suddenly timing out at 11 PM every evening. Three months. It turned out to be a network configuration issue nobody warned us about in the tutorials. That's when I realized most "serverless" content glosses over the messy reality of what happens when you remove servers from the equation but inherit an entirely new set of problems.
Let's talk about serverless without the marketing BS.
What Serverless Actually Means
Serverless doesn't mean there are no servers. Obviously servers still exist somewhere in an AWS data center running your code. What it really means is you stop managing infrastructure and start paying for execution time. You write functions, upload them, and the platform handles scaling, deployment, and the underlying compute resources.
The mental shift is profound. With traditional servers, you provision capacity upfront based on predictions. You pay for those servers whether you use them or not. With serverless, you pay millisecond by millisecond. AWS Lambda charges are calculated in GB-seconds: if your function runs for 100ms using 512MB of memory, that's 0.05 GB-seconds, billed accordingly.
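The billing arithmetic above can be sketched in a few lines. The per-GB-second price used here ($0.0000166667, the published x86 rate at the time of writing) is an assumption; check current AWS pricing for your region, and note the small per-request fee is excluded.

```python
# Sketch of Lambda's GB-second billing math. The price constant is an
# assumption -- verify against current AWS pricing for your region.

PRICE_PER_GB_SECOND = 0.0000166667  # assumed x86 rate; not authoritative

def gb_seconds(duration_ms: float, memory_mb: int) -> float:
    """Billed compute: duration in seconds times allocated memory in GB."""
    return (duration_ms / 1000) * (memory_mb / 1024)

def invocation_cost(duration_ms: float, memory_mb: int) -> float:
    """Compute cost of one invocation (per-request fee excluded)."""
    return gb_seconds(duration_ms, memory_mb) * PRICE_PER_GB_SECOND

# The example from the text: 100 ms at 512 MB is 0.05 GB-seconds.
```

Run a month of expected traffic through this before committing; the numbers are often surprisingly small, and occasionally surprisingly large.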
For the first time in my career, I could build infrastructure that costs literally nothing when nobody uses it. That's worth thinking about.
The Real Trade-Offs Nobody Discusses
Here's what experienced practitioners know but rarely say out loud: serverless is dramatically better for certain workloads and genuinely terrible for others.
Cold starts are the classic complaint. When your Lambda hasn't run in a while, AWS needs to provision a container, load your code, and start it. This adds latency—anywhere from 100ms to several seconds depending on your runtime, function size, and memory allocation. For a background job running once an hour, who cares? For a real-time API, this is unacceptable. We worked around this on a Vietnamese fintech project by keeping functions warm with synthetic invocations, which costs money and feels hacky. It is hacky.
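The keep-warm hack looks roughly like this: a scheduled rule (EventBridge, say) invokes the function with a synthetic payload, and the handler returns immediately so the ping stays cheap. The `keep-warm` marker key is our own convention, not anything AWS-defined.

```python
# Minimal sketch of the keep-warm pattern. A scheduled rule sends a
# synthetic payload like {"keep-warm": true}; the handler short-circuits
# it before doing any real work. The marker key is our own convention.

def handler(event, context):
    # Bail out early on synthetic warm-up pings.
    if isinstance(event, dict) and event.get("keep-warm"):
        return {"warmed": True}
    # ... real request handling would go here ...
    return {"statusCode": 200, "body": "handled"}
```

A ping every few minutes per function is usually enough; you're paying for a handful of near-zero-duration invocations per hour, which is cheap but, as noted, hacky.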
The timeout limit is another constraint that bites people. Lambda functions have a maximum execution time of 15 minutes. Period. You cannot change this. Some workloads are fundamentally incompatible with this constraint. If you're doing heavy data processing that takes 30 minutes, you need a different approach—either split it into smaller functions or use ECS/EC2.
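One way to split such a job is to batch the work so each batch's estimated runtime stays comfortably under the 15-minute cap, then fan the batches out (one SQS message per batch, each handled by its own invocation). The sizing heuristic below is a sketch; the 80% safety margin is our own assumption.

```python
# Greedily group work items into batches that fit a time budget, so each
# batch can run in its own Lambda invocation. The 80% margin against the
# 15-minute cap is an assumed safety factor, not an AWS recommendation.

LAMBDA_TIMEOUT_S = 15 * 60

def batch_items(items, est_seconds_per_item, budget_s=LAMBDA_TIMEOUT_S * 0.8):
    """Split items into batches whose estimated runtime fits budget_s."""
    per_batch = max(1, int(budget_s // est_seconds_per_item))
    return [items[i:i + per_batch] for i in range(0, len(items), per_batch)]
```

Each batch then becomes one queue message, and the consumer function processes a batch per invocation. If per-item time is too variable to estimate, that's a strong hint the workload belongs on ECS or EC2 instead.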
Debugging and monitoring are harder. With a server, you SSH in and look around. With Lambda, you're dependent on CloudWatch logs and distributed tracing tools like X-Ray. If something goes wrong in production at 2 AM, you're fighting with structured logging and custom metrics you should have set up months earlier. I've seen teams avoid serverless entirely because their legacy monitoring infrastructure doesn't integrate well.
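The structured logging worth setting up before the 2 AM incident is not complicated: one JSON object per line, so CloudWatch Logs Insights can filter on fields instead of grepping free text. The field names below are our own convention.

```python
# Minimal structured-logging helper: emit one JSON object per line so
# log tooling (e.g. CloudWatch Logs Insights) can filter on fields.
# Field names ("ts", "level", "msg") are our own convention.
import json
import time

def log(level, message, **fields):
    """Print and return a single JSON log line."""
    record = {"ts": time.time(), "level": level, "msg": message, **fields}
    line = json.dumps(record)
    print(line)
    return line
```

A call like `log("error", "payment failed", order_id="A-123", upstream="stripe")` then becomes queryable with something like `filter level = "error"` in Logs Insights, instead of a needle in a text haystack.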
Vietnam Market Considerations
Vietnam's cloud infrastructure costs are not the same as US costs. The pricing advantage of serverless—pay only for what you use—is even more compelling in markets where infrastructure budgets are tighter. A Vietnamese e-commerce platform we worked with built their checkout flow entirely on Lambda + API Gateway + DynamoDB. During off-peak hours, their infrastructure cost dropped to under $5 per day. During peak Tet shopping season, it scaled automatically to handle 50x traffic without any intervention.
The operational cost is low, but you need someone who understands these services. That's often harder to find than a traditional backend engineer. The Vietnamese tech market is improving, but AWS certifications are still less common than traditional server experience.
The Surprising Winner: Operational Simplicity
Where serverless genuinely wins is operational simplicity when you have the right problem. You don't patch operating systems. You don't manage security groups or SSL certificates at the infrastructure level. You don't keep database connections open. You don't worry about traffic spikes that exceed your capacity.
A startup we advised built their initial MVP entirely serverless: Lambda for APIs, DynamoDB for database, SQS for queues, S3 for storage. The entire infrastructure was defined in CDK code. When they needed to scale to 10 times their initial load, nothing changed. No database optimization, no load balancer tuning, no late-night crisis calls. It just worked.
Compare this to a startup using EC2: they'd need monitoring, auto-scaling groups, probably a database that needs tuning, load balancers, and someone on call for incidents. The operational burden is orders of magnitude higher.
Things the Serverless Advocates Get Right
Event-driven architecture becomes natural. Instead of building complex message queues and workers manually, you connect Lambda functions directly to SNS topics, S3 bucket events, or API Gateway. The platform handles the plumbing. This style of architecture scales beautifully because there's no coordination—each function independently responds to events.
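Concretely, the event-driven style means your handler just reads the event the platform hands it. Here's a sketch for the standard S3 notification shape Lambda receives when a bucket event fires; the processing step is elided.

```python
# Sketch of an S3-triggered handler: Lambda receives the bucket
# notification as a "Records" list; each record carries the bucket name
# and object key that fired the event.

def handler(event, context):
    """Collect the (bucket, key) pairs this invocation should process."""
    objects = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        objects.append((s3["bucket"]["name"], s3["object"]["key"]))
    # ... process each object (resize image, parse log file, etc.) ...
    return objects
```

There's no queue to build, no worker pool to size: upload an object, and the platform invokes the function with this payload.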
Cost visibility is real. With traditional infrastructure, you're paying for provisioned capacity whether it's used or not. With serverless, every single invocation is tracked, and you can see exactly what's expensive. We found that one Lambda function—representing 0.01% of our invocations—was responsible for 25% of our costs because it made expensive external API calls. That visibility saved us money.
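The analysis that caught our expensive function is just arithmetic over numbers you can pull from billing and CloudWatch data: per-function invocation counts and average cost per invocation. A sketch:

```python
# Sketch of a per-function cost-share breakdown from billing data.
# Input shape {name: (invocations, avg_cost_per_invocation)} is our own
# convention for numbers pulled from CloudWatch/Cost Explorer.

def cost_shares(stats):
    """Return each function's share of total spend, as a fraction."""
    totals = {name: count * unit_cost for name, (count, unit_cost) in stats.items()}
    grand_total = sum(totals.values())
    return {name: cost / grand_total for name, cost in totals.items()}
```

Sorting the result descending is usually enough to find the one function quietly dominating the bill, like our 0.01%-of-invocations, 25%-of-cost outlier.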
The Gotchas in Practice
Concurrency limits will surprise you. Lambda has a default concurrent execution limit (usually 1000). If you hit this limit, new invocations get throttled. We discovered this the hard way when a data import job started throttling other critical functions. You need to request limit increases and understand burst capacity, which isn't the straightforward story the marketing materials tell.
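The planning check we wish we'd done before the import job incident: when you carve reserved concurrency out of the account pool for critical functions, AWS requires a minimum to remain unreserved (100, to our knowledge; the default account limit of 1000 and that floor are assumptions worth confirming in the Lambda console or Service Quotas).

```python
# Sketch of a reserved-concurrency sanity check. The 100-unit unreserved
# floor and 1000 default account limit are assumptions -- confirm your
# account's actual quotas before relying on these numbers.

MIN_UNRESERVED = 100

def unreserved_pool(account_limit, reserved):
    """Concurrency left for unreserved functions; raises if reservations overreach."""
    pool = account_limit - sum(reserved.values())
    if pool < MIN_UNRESERVED:
        raise ValueError(
            f"reservations leave only {pool} unreserved; "
            f"need at least {MIN_UNRESERVED}"
        )
    return pool
```

Reserving concurrency for the noisy batch job (capping it) and for the critical API (guaranteeing it) would have prevented our throttling incident; this check just keeps the reservations themselves honest.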
VPC networking adds latency and complexity. If your Lambda needs to access a private database in your VPC, you add 10-100ms of latency. Most tutorials show Lambda without VPC, which is fine for public APIs but unrealistic for enterprise workloads. The VPC networking setup introduces additional failure modes.
Dependencies bloat your function. If you import the AWS SDK, pandas, or any heavy library, your function size grows. We had a team trying to run data science models in Lambda—they spent weeks optimizing function size and realized they should have just used SageMaker or EC2 instances instead.
When to Actually Use It
Serverless is genuinely the right choice for:
Real-time APIs with spiky traffic - E-commerce sites, APIs that scale from 10 to 10,000 requests per second
Background jobs and event processing - Image resizing, log processing, data ETL, webhooks
Prototyping and MVPs - Get something working fast, scale later if needed
Multi-tenant applications - Charge per function invocation, operational cost scales with revenue
Scheduled tasks - EventBridge can trigger Lambda on a cron schedule, beats maintaining a scheduler
Microservices with clear boundaries - When each service has distinct scaling requirements
Don't use serverless for:
Long-running processes - Batch jobs over 15 minutes, real-time processing pipelines
Applications requiring persistent server state - Complex session management, long-lived WebSocket connections
Heavy CPU workloads - Training models, video encoding (use specialized services instead)
Workloads with extremely tight latency requirements - When every millisecond matters
The Real Verdict
Serverless is genuinely transformative—but not universally. It's a tool optimized for a specific problem: building scalable applications without dedicated operations teams. For startups, it's often perfect. For enterprises already running massive infrastructure teams, the advantage is less clear.
The best serverless architectures I've seen were built by teams that understood what they were trading off. They knew their cold start latency tolerance, understood the cost model, had observability from day one, and didn't try to run their entire infrastructure on it.
If you're building anything with spiky, unpredictable traffic in Vietnam or anywhere else, serverless deserves serious consideration. Just go in with eyes open about the constraints.
At Idflow Technology, we've helped teams across Vietnam navigate these trade-offs—both the hype and the reality. Whether serverless makes sense depends entirely on your workload, and that's a conversation worth having before you commit.