Serverless, Inc. lands $10M Series A to build serverless dev platform

techcrunch.com

191 points by TheMissingPiece 7 years ago · 203 comments

paulgb 7 years ago

I don't understand the knee-jerk opposition people have to serverless architectures. I recently developed a service[1] with the serverless framework and it was the first time I enjoyed developing server-side code since the era of PHP on shared hosts, where you could just upload code and refresh the page. There's something freeing about never having to think about the server process or what happens if the server is power cycled.

So congrats on the funding, I hope you can convert some haters :)

[1] obligatory plug: https://notify.run

  • Zelphyr 7 years ago

    It feels like a giant step backward from a development standpoint. It's as bad or worse than the days when I had to make a change, save it, FTP that file to the server, refresh, lather-rinse-repeat. Want a debugger? Nope. Want log files? Gotta get them from a different service that frequently has a lag of 30 seconds or more[1]. Don't get me started on the massive vendor lock-in inherent with a Serverless Architecture[2].

    Don't get me wrong; from a resources management and scaling perspective it's great. But that doesn't outweigh the massive pain during development that it creates.

    We're in the process of rebuilding a project to move everything out of a Serverless Architecture. After six months of building it on serverless we finally all agreed it was a big mistake.

    [1] That's our experience with AWS. May be different with other providers.

    [2] I recognize that Serverless Framework helps mitigate this but that's just yet another abstraction on top of abstractions in my opinion.

    • laumars 7 years ago

      I don't disagree with you that serverless is a step backwards if you want control over your running platform, but your debugger example can be argued away by the fact that you shouldn't really be debugging a live platform anyway. If your code is portable enough to run "serverless" then you should be able to spin up a Docker container or whatever and run the same tests in there.

      I think the issue is some people see "serverless" as an all-or-nothing scenario, which it really isn't. Some problems are solved well with serverless and some are not. It's like the container vs virtual machine argument. One isn't designed to replace the other - they're just different tools for solving different problems; each has its own strengths and weaknesses.

      • Zelphyr 7 years ago

        How do I debug locally/on Docker with services like SNS and ElasticTranscoder? I know there are emulators but those have gaps in services that providers like AWS offer. Not to mention, I'm debugging with an emulator and not the real thing.

        • scarface74 7 years ago

          I can’t speak for ElasticTranscoder, but there is nothing stopping you from running your lambda function locally while still connecting to all of the AWS services.

          Just like people have been calling their controller actions from test harnesses for ages to test their APIs, you can call your handler the same way.

          At the end of the day, AWS is just passing you a JSON message. You can call your lambda function manually with the same JSON payload locally.
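
          A minimal sketch of that (the handler, payload, and file names here are hypothetical), in TypeScript - local invocation is just an ordinary function call:

            // handler.ts -- the function that would be deployed to Lambda
            export async function handler(event: { orderId: string }) {
              // real business logic would go here; this stub just echoes the input
              return { statusCode: 200, body: JSON.stringify({ received: event.orderId }) };
            }

            // local-test.ts -- invoke the handler directly, no AWS involved
            import { handler } from "./handler";

            // the same JSON payload AWS would deliver, e.g. captured from a real event
            handler({ orderId: "test-123" })
              .then((result) => console.log(result)) // inspect the response as Lambda would return it
              .catch(console.error);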

          • laumars 7 years ago

            This is exactly how we test our lambda code that calls AWS media encoders.

        • laumars 7 years ago

          You could run the docker container on an EC2 instance or have an SSH bastion / VPN tunnel between yourself and your AWS VPC.

            I tend to opt for the SSH tunnel route for personal projects but use OpenVPN with secrets management (e.g. Vault or AWS's new key store) for professional projects (i.e. when you have several people on your team).

          AWS is versatile enough that you do actually have several options available to you.

    • matchagaucho 7 years ago

      Serverless architecture certainly benefits from a localhost-first environment with mock data and unit test feedback prior to deployment.

      But then... any architecture benefits from localhost-first.

      • Zelphyr 7 years ago

        We found that the further we got into using different services, the more difficult local development became. I just assumed that, as in the old days of web development, we would eventually be forced to give up local development entirely. So then we'd have to duplicate services: one set for development and one for production. Or multiple sets for development, so that developers didn't step on each other's toes. There goes the cost savings touted by Serverless Architecture.

        These are problems that will be solved eventually, I have no doubt. But they haven't been solved yet and that is what makes Serverless Architecture not ready for prime time in my opinion.

        • laumars 7 years ago

          It's good practice to have separate dev and prod services anyway. I don't see that as duplication, I see that as different operating environments.

          Even back in the days of self-hosting, I've lost track of the number of times a dev script has gone wild and caused excessive load on the RDBMS or tried to spam the SMTP relay. In a dev environment you can catch that without affecting live services.

          As for the other issues you mentioned, this is nothing new. There have always been appliances in IT. Whether it's physical rack-mountable appliances like hardware video encoders, or SaaS solutions on a public cloud like AWS media transcoders, you interact with them the same way. The only difference with public cloud solutions is that you first have to bridge your office network with your AWS (or whatever) virtual private network. Thankfully you have a variety of tools at your disposal to do that. (Over the years I've used no less than 4 different methods to bridge a local network with an AWS VPC - each chosen for its own specific business requirements.)

          • mmt 7 years ago

            > There have always been appliances in IT.

            Indeed, and, as a general rule, they were astonishingly expensive, especially in up-front purchase cost.

            > The only difference with public cloud solutions

            I posit that another difference is that the up-front cost is approximately zero. That makes the adoption decision much easier (even possible in the first place) for much smaller companies, especially startups.

            That also means what would otherwise have been an up-front cost is hidden in the pay-as-you-go cost.

            That cost-hiding does create a new problem, albeit a subtle one. A startup faced with having to buy a $300k appliance might think nothing of it if there were a smaller/dev version available for $30k for that environment. However, if that appliance actually needed to be duplicated in the full $300k form [1] in both environments (or maybe more, if they have stage/qa/integration, research, and/or multiple dev envs), that startup would take a serious look at alternatives. Those relative costs aren't as stark with the cloud version of appliances, until well after the choice has been made.

            [1] Or, worse, an even more expensive version, if dev, testing and/or research use of the appliance is heavier than production. I expect that's rare, outside of storage.

      • jjeaff 7 years ago

        Localhost first is ideal. But not always possible.

        • eyjafjallajokul 7 years ago

          Can you explain why?

          • Zelphyr 7 years ago

            Why it's ideal, or why it's not always possible?

            Ideal because development is faster. And free, since I'm not paying Amazon for compute time to run my code while I write it.

            Not always possible because I don't have AWS's services locally. As mentioned elsewhere, there are emulators, but by virtue of being emulations they may not always behave like the real thing. That's a debugging ball of wax that I personally hate having to untangle.

            • scarface74 7 years ago

              Which services are you using for which you are concerned about cost while you are developing?

              You can access most (all?) of AWS services without running on AWS EC2/Lambda.

            • chii 7 years ago

              Can't these services be mocked out? Surely you are wrapping the service API with your own code to prevent vendor lock in right?

              • jdo20bbx 7 years ago

                Doesn’t everyone just spin up and tear down either an EC2-based Docker host or an EKS worker as needed?

                Especially for CRUD apps and otherwise simple apps, testing should start after git push.

                Don’t want to wait each time? Automate build and teardown of test envs on a schedule, to avoid keeping them always up.

                I still local test low-level code. But anything running in a cloud should really just be tested there.

          • jjeaff 7 years ago

            Proprietary software or hidden configurations. I have my site in a local Docker container. But I can't spin up an Aurora instance, so I use a similar MySQL container. It's not exactly the same, though.

            And sometimes the data is too large to have fully on localhost. You can work with a subset, but if you have a 2tb database, it's not feasible to have the whole thing local for testing.

    • dsp1234 7 years ago

      • Zelphyr 7 years ago

        That's great. I'm sure Amazon and Google will follow suit. But they're not there yet and Amazon is the big guy that most people are on.

        As I've said before: these are all problems that all of the cloud providers will solve eventually. But until they do, I'm not sold on building a production application on a Serverless Architecture.

    • inopinatus 7 years ago

      These are complaints about the immaturity of the implementations. About which I’m inclined to agree but still recognise that the architectural style has legs.

    • stcredzero 7 years ago

      > It feels like a giant step backward from a development standpoint. It's as bad or worse than the days when I had to make a change, save it, FTP that file to the server, refresh, lather-rinse-repeat.

      So what if I gave you a command-line tool to simply make the current code in your development environment live?

      > Want a debugger? Nope.

      And I also gave you live step debugging, plus replays of recent server state?

      > Want log files? Gotta get them from a different service that frequently has a lag of 30 seconds or more.

      And the logs from the instance you're debugging are available instantly?

      • hayd 7 years ago

        I'm a big fan of serverless, but these are valid questions to which there really ought to be better answers than "nope".

  • rawrmaan 7 years ago

    I think a lot of people have tried “serverless” and found it to present more challenges than it solves. How, for example, do you connect to a Postgres database from Lambda/Cloud Functions? As far as I can tell, the answer is: You don’t, you use a different database.

    No-worries devops experiences are nothing new. See Heroku.

    • yoran 7 years ago

      I think the example you give is more because there's a mismatch between technologies, rather than the "fault" of serverless. In serverless, your endpoints become infinitely scalable. This doesn't go well when they're backed by technologies where there is a hard limit on the number of connections, for instance SQL servers or a Redis server. I think therefore that SQL database technologies have to adapt to the serverless paradigm rather than dismissing serverless "because it brings so many other issues". I think AWS has already started that with Amazon Aurora.

      That aside, I agree that there are a lot of secondary concerns that are important when running things in production but that aren't available out-of-the-box when you run something on AWS Lambda. I'm thinking about error monitoring, performance monitoring, logging,... All those things need to be set up and that's quite time-consuming.

      However, I think that's more due to serverless being relatively new and not as mature as the traditional way of doing things. I don't think it will take long before we'll have the serverless equivalent of adding `gem 'newrelic_rpm'` to your Gemfile and magically having performance and error monitoring across your app.

      • rantanplan 7 years ago

        > I think therefore that SQL database technologies have to adapt to the serverless paradigm rather than dismissing serverless

        Empty statement that means nothing. SQL/RDBMS is backed by computer science and robust engineering examples that make the world spin. Alternatives are usually full of fanfare and false promises.

        > I think AWS has already started that with Amazon Aurora.

        Just in time when we were talking about fanfare. Spend 10-20 minutes searching on the Internets to see what actual experiences people have with it.

        Its 3X write increase? Bollocks. Usually the performance is worse than when you administer your own DB (PostgreSQL/MySQL). You might (or might not) see some read-performance increase, which... well, everyone can scale on reads, so I don't see the point.

        I suspect it has other goodies pertaining to administration/provisioning, but performance/scaling is not one of them.

        • athenot 7 years ago

          >> I think therefore that SQL database technologies have to adapt to the serverless paradigm rather than dismissing serverless

          > Empty statement that means nothing. SQL/RDBMS is backed by computer science and robust engineering examples that make the world spin. Alternatives are usually full of fanfare and false promises.

          Traditional relational databases have indeed solved many issues that some newer datastores struggle with. But the flip side is that it is non-trivial to design traditional databases that are not Single Points Of Failure.

          Storing data is surprisingly hard in a cloud environment, and involves trade-offs. Reaching a comprehensive solution (fast, HA, consistent, easily recoverable, scalable volume, evolvable schemas...) is hard no matter what technology you pick.

        • justincormack 7 years ago

          Not being able to handle more than a few connections without connection pooling has nothing to do with using SQL. It is just a different bit of optimization, needed to support fast transient connections without pooling.

      • rawrmaan 7 years ago

        It just seems to me like the "infinite scalability" promise of serverless is only realistic if you have no database. Because inevitably, you'll hit database scaling issues due to query patterns and suboptimal indexes LONG before you'll have a hard time scaling up your fleet of servers because you're getting too many requests.

        • zaarn 7 years ago

          I'd also like to see the AWS bill once you hit infinite scale. AWS is already pretty expensive on its own...

    • ryanworl 7 years ago

      You connect the same way as you do in a regular app. Which is to say, you open the database connection outside of the request handling method (for example, as a global) and then use it from within the request handling method. When your app wakes up again for another request, the database connection is still open and you just use it.
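
      A minimal sketch of that pattern, assuming Node.js on Lambda with the 'pg' Postgres client (the env var, table, and names are illustrative):

        import { Client } from "pg";

        // Opened once per container, outside the handler; warm invocations
        // of the same container reuse it instead of reconnecting.
        let client: Client | null = null;

        async function getClient(): Promise<Client> {
          if (!client) {
            client = new Client({ connectionString: process.env.DATABASE_URL });
            await client.connect();
          }
          return client;
        }

        export async function handler(event: { userId: string }) {
          const db = await getClient(); // no reconnect cost when warm
          const { rows } = await db.query(
            "SELECT name FROM users WHERE id = $1",
            [event.userId]
          );
          return { statusCode: 200, body: JSON.stringify(rows[0] ?? null) };
        }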

      • qudat 7 years ago

        As far as I know this works, but more as a hack than as a robust, officially supported solution.

        • scarface74 7 years ago

          How is that a “hack”? You create your DB and get a connection string to a publicly accessible database, or you create it inside a VPC, configure your lambda to run inside a subnet within that VPC, and configure your security group. This can all be done within the console.

          • nrb 7 years ago

            The main issue with this approach is that running your lambda in a VPC results in painfully slow cold starts, on AWS at least.

            • paulddraper 7 years ago

              IDK why the parent mentioned VPC. It's not necessary.

              • scarface74 7 years ago

                Without a VPC, how do you not expose your Aurora cluster to the world?

                https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Auror...

      > Aurora DB clusters must be created in an Amazon Virtual Private Cloud (VPC). To control which devices and Amazon EC2 instances can open connections to the endpoint and port of the DB instance for Aurora DB clusters in a VPC, you use a VPC security group. These endpoint and port connections can be made using Secure Sockets Layer (SSL). In addition, firewall rules at your company can control whether devices running at your company can open connections to a DB instance. For more information on VPCs, see Amazon Virtual Private Cloud (VPCs) and Amazon RDS.

    • brylie 7 years ago

      I may misunderstand your comment.

      That said, where I work (MaaS Global) we have a production PostgreSQL database hosted on AWS Relational Database Service (RDS):

      https://aws.amazon.com/rds/postgresql/

      We connect to the AWS RDS instance in our lambda functions using a Node.js SQL query builder called knex.js, with environment variables storing the DB credentials:

      https://knexjs.org/
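
      Not their actual code, but the shape of that setup is roughly this (a sketch; the env var names and table are made up):

        import knex from "knex";

        // Created once at module load so warm Lambda invocations reuse it.
        const db = knex({
          client: "pg",
          connection: {
            host: process.env.DB_HOST, // the RDS endpoint
            user: process.env.DB_USER,
            password: process.env.DB_PASSWORD,
            database: process.env.DB_NAME,
          },
          pool: { min: 0, max: 1 }, // keep each container's connection count tiny
        });

        export async function handler(event: { id: string }) {
          const row = await db("items").where({ id: event.id }).first();
          return { statusCode: 200, body: JSON.stringify(row ?? null) };
        }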

      • joecot 7 years ago

        How do you deal with the 10+ second cold start times for Lambda when using it in a VPC? Are you pre-warming your lambda functions? Did you open up your RDS instance to the world so you could connect to it from a public lambda network? I know you had to pull some magic, because I've been down that road.

        It's been a problem for years and there's been no sign of a solution. Example article from last month: https://medium.freecodecamp.org/lambda-vpc-cold-starts-a-lat...

        These are the sorts of problems that turn people off from using serverless architectures.

        • kauju 7 years ago

          You're right that the cold start times are not ideal. But you get a huge free request load per month. Put an uptime pinger on it and keep it warm. Or do what I do and write your functions in Golang. My average cold start time is around 4 seconds.

          For the DB connection, you put the lambda in the same VPC that the RDS instance exists in. Then you open the connection pool and reuse it if it's active. Not that a new connection is a big overhead over leveraging an established socket.

          Wonder where all this misinformation is coming from on lambda DB access issues.

          • sbov 7 years ago

            I know uptime pingers are easy and obvious solutions (I use them myself), but every time I have to resort to this sort of hack it reminds me of how immature serverless is.

          • joecot 7 years ago

            Here's the problem. Uptime pingers work great if you have a low-volume service. You keep 1, 2, or maybe 3 instances of the function warm, and you don't have to deal with cold start times. But there are 2 places that idea falls seriously flat.

            1. This doesn't work if you were actually trying to build your API as microservices. You might have 60+ functions, some of which call each other, and keeping them all warm is not really a good option.

            2. Keeping a minimum number of instances warm fails to account for half the point of using serverless architectures: being able to scale. Sure, if you have little to no traffic, you can keep a couple instances warm and be up, but if your app needs to scale to 5 or 10 or more instances to handle bursts of traffic, the users who hit that cold start end up dealing with an extremely bad experience.

            More importantly, as Lambda gets more popular, uptime pingers get less and less useful because of the tragedy of the commons. The reason for needing cold starts at all is that AWS is rotating out instances to be able to keep up with overall demand with limited resources. If only a few people are sending heartbeats to their instances, their instances stay in rotation because other people's get rotated out instead. If everyone is sending heartbeat requests, some of them will still end up getting rotated out, and therefore everyone will need to increase the frequency of the heartbeat requests to keep their functions warm. It's not a sustainable solution, and I'm baffled that AWS tacitly promotes it as a resolution to a problem they themselves have caused.

            It's been years. AWS needs to fix Lambda VPC cold starts.

            • wahnfrieden 7 years ago

              AWS is fixing it by moving to IAM authentication in the serverless ecosystem, rather than network segmentation. Serverless Aurora will support IAM auth at scale via its HTTP query protocol.

              Keeping Lambda functions warm is great until you have 2 or more requests hitting the function simultaneously. They won't queue behind the pre-warmed function, they will spin up additional Lambda containers to serve in parallel. Unless you don't expect to get concurrent requests, there's no effective way to pre-warm Lambda functions.

          • dpeck 7 years ago

            Have any data on the variance on that 4 second average? That sounds very tolerable on its face.

        • jchrisa 7 years ago

          My employer offers FaunaDB with a pay-per-request pricing model. To bypass cold-start lambda issues, I code the app to talk directly to the database. For certain richer functions I might invoke a Lambda, but for basic crud operations the database access control does the trick. And no cold-start issue.

          Here's the data model part of my todo app if you want to see queries in the app: https://github.com/fauna/todomvc-fauna-spa/blob/master/src/T...

          • joecot 7 years ago

            AWS also has NoSQL cloud solutions, particularly DynamoDB, and maybe SimpleDB if you want to risk building on a someday deprecated service.

            Those options work fine, if you were OK with using a NoSQL DB. But what if you wanted to use an actual relational database? For that you pretty much need Lambda in VPC, and it's not really usable because of the cold start issue.

            At some point Amazon will release Aurora Serverless[1], giving a serverless option for an on-demand relational database. Will that work somehow with Lambda without needing a VPC, therefore defeating the cold start issue? What cold start issues will it have itself? I guess we'll wait and see for now.

            1. https://aws.amazon.com/rds/aurora/serverless/

          • cmjqol 7 years ago

            > My employer offers FaunaDB with a pay-per-request pricing model.

            Tried FaunaDB a few months ago; the latency was beyond 200ms for a simple read, and beyond 600ms for an insert.

            Would not recommend it at this point.

            • jchrisa 7 years ago

              We don’t expect you to see that lag. Other users don’t see it or haven’t reported it. What region are you accessing in and how did you generate the result?

        • com2kid 7 years ago

          > How do you deal with the 10+ second cold start times for Lambda when using it in a VPC?

          And I was complaining about 500ms cold start times on Firebase Functions.

          I think I'll stop complaining now.

    • wahnfrieden 7 years ago

      It’s still the early days so there are pain points, but Amazon already announced a solution to this: Serverless Aurora. It’ll be some time still until it’s public and Lambda-friendly though. And MySQL comes before Postgres.

    • nodesocket 7 years ago

      With Google Cloud Functions (they got the name right), you can simply link with a Cloud SQL instance using a special local socket interface provided by Google Cloud[1]. Their documentation provides complete examples as well on how to use global connection pools for MySQL and PostgreSQL.

      [1] https://cloud.google.com/functions/docs/sql
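
      The gist of the documented pattern, sketched with the 'pg' Pool (the instance connection name, credentials, and function name are placeholders):

        import { Pool } from "pg";

        // Created at module load so the pool survives across warm invocations.
        const pool = new Pool({
          user: process.env.DB_USER,
          password: process.env.DB_PASSWORD,
          database: process.env.DB_NAME,
          // Cloud Functions exposes a linked Cloud SQL instance as a local
          // unix socket under /cloudsql/<PROJECT>:<REGION>:<INSTANCE>
          host: `/cloudsql/${process.env.INSTANCE_CONNECTION_NAME}`,
          max: 1, // one connection per function instance bounds the global count
        });

        // HTTP Cloud Function with the Express-style (req, res) signature
        export async function helloSql(req: any, res: any) {
          const { rows } = await pool.query("SELECT NOW() AS now");
          res.json(rows[0]);
        }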

      • rawrmaan 7 years ago

        Okay, that's a great solution. I actually had no idea that was possible.

    • all_blue_chucks 7 years ago

      > How, for example, do you connect to a Postgres database from Lambda/Cloud Functions?

      Exactly the same way you would with an EC2 instance...

      • tango12 7 years ago

        Might not be that easy, because Postgres can only take a few hundred connections, which won't work out if you have a few thousand serverless functions. No persistent connection pools.

        • somebodythere 7 years ago

          Well, just put the connection logic outside of the main handler so it's shared between invocations!

          Wait, oops, you have state now.

          • rhlsthrm 7 years ago

            Realistically could you do this with Redis or something similar? Not sure about security implications of this though...

      • GordonS 7 years ago

        How do you handle database connection pooling?

    • ironjunkie 7 years ago

      I fully agree with you. Based on my experience, Lambda is perfect if you only want to perform some relatively standalone task (such as compute-intensive rendering). As soon as it needs to connect to 3rd-party entities, it becomes very slow and loses some of its benefits. Connecting Lambda to an AWS DB, for example, is challenging to say the least. It also takes a couple tens of milliseconds just to set up the DB socket and connection, which on a normal server can simply remain open waiting for the next request.

      Serverless is nice, but the serverless tooling ecosystem is really lacking today IMO.

      • scarface74 7 years ago

        > Connecting Lambda to an AWS DB, for example, is challenging to say the least.

        Why does this keep getting repeated? You get a publicly accessible host and use the same drivers you use on prem, or you put both the lambda and the database inside your VPC.

    • bmilleare 7 years ago

      As others have pointed out, connecting to RDS from Lambda is actually pretty trivial: have them run in the same VPC. We did come up against an issue where our Lambda function needed access to RDS but also to the outside world, which meant some extra hurdles to jump [1], but overall our experience with Lambda has been a positive one.

      I don't think we'll be considering a fully serverless architecture anytime soon due to cold-start times, but it's awesome for anything outside of the user request/response loop, or for internal microservices where response time is perhaps not such a problem.

      [1] https://aws.amazon.com/premiumsupport/knowledge-center/inter...

    • dhdersch 7 years ago

      Umm, for AWS, you place the Lambda function into a VPC and open a db connection the same way you would from another server?

      • spullara 7 years ago

        I think what they are referring to is that Postgres and most other databases were built in the before time, the long long ago, when every connection was a process and you limited connection concurrency in the configuration file. If you have 1000 concurrent calls on Lambda, you aren't going to be able to have them all talking to the same database at the same time. You'll run out of connections and the application will crash. Same reason you see this happen to PHP web applications when HN or Slashdot is pointed at them and they say they can't connect to the database: they have hit the concurrency limit. Connection pooling solves this problem but currently requires another layer between the application and the database.
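
        That extra layer is typically something like PgBouncer in front of Postgres. A rough sketch of such a config (the host and limits are illustrative, not tuned values):

          ; pgbouncer.ini -- illustrative, not production-tuned
          [databases]
          appdb = host=10.0.1.5 port=5432 dbname=appdb

          [pgbouncer]
          listen_addr = 0.0.0.0
          listen_port = 6432
          auth_type = md5
          auth_file = /etc/pgbouncer/userlist.txt
          ; transaction pooling lets thousands of short-lived client
          ; connections share a few dozen real Postgres backends
          pool_mode = transaction
          max_client_conn = 5000
          default_pool_size = 20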

    • scarface74 7 years ago

      What do you mean, how do you connect? The same way you connect on prem: using the Postgres drivers. You can either have a publicly accessible Postgres instance with a DNS entry (not recommended) or you can run both the DB and the lambda inside of a VPC.

    • dpeck 7 years ago

      That’s really what’s keeping me from developing stuff with it. I can get the job done with dynamodb but I’m really looking forward to building something with Postgres in a “serverless” context.

    • ac360 7 years ago

      imo, the most interesting data point is how in demand serverless architectures are WITH these issues. Imagine how compelling serverless will be once these issues are solved.

  • chx 7 years ago

    Serverless is a buzzword for a distributed system you can't SSH into.

    • paulgb 7 years ago

      I can't tell if that's meant to be a pejorative, but a distributed system I can't (and don't need to) SSH into makes my life easier :)

      • old-gregg 7 years ago

        The comment above probably meant to say that any environment becomes "serverless" as soon as you stop managing instances/servers manually and interact with it strictly via an API endpoint, which of course has always been possible but required fairly in-depth knowledge of AWS APIs (or another cloud provider's).

        Look, there's nothing dirty about re-framing the conversation every once in a while. It may feel like BS at first but over time we attach new meaning to old words and it becomes easier to communicate capabilities.

      • lowercased 7 years ago

        Can't and not needing to are two really different things. If 'not needing to' just means the work is shifted onto proprietary environments for stuff essential to debugging (like looking at log files), that is, for most people, not a net win.

  • manigandham 7 years ago

    Because it should really just be called Functions-as-a-Service. Functions are great for single-focus reactive/connective processing and one-off tasks, but they are not the replacement for everything that people try to use them as.

    • k__ 7 years ago

      As far as I know, serverless isn't just FaaS. As you pointed out FaaS is to connect things, nothing more.

      For me serverless is pay-as-you go pricing, no over- or under-provisioning and last but not least, no server-management.

      Lambda, DynamoDB, S3, AppSync, Device Farm, Aurora, etc. are all serverless.

      • castlecrasher2 7 years ago

        > Lambda, DynamoDB, S3, AppSync, Device Farm, Aurora, etc. are all serverless.

        Yep. Lambda may be functions-as-a-service, which is a great name for it, but the whole shebang is more than that. Thus, serverless.

      • detaro 7 years ago

        How would you distinguish it from "PaaS"?

        • k__ 7 years ago

          The "serverless" model frees you of more tasks than PaaS, like capacity planning and scaling, also it's more flexible and fault tolerant.

    • staticassertion 7 years ago

      Why shouldn't it be called serverless?

      As a consumer, I essentially get to treat it as such - there is no server I need to manage.

      In AWS-land this is a big differentiator over a system I have to manage, such as an EC2 instance that just executes containers (which would still meet the definition of FaaS). Now I have FaaS and I don’t have to manage the infra, which is huge, because AWS will be far better at meeting a patching SLA than I will be.

      Yes, obviously there is a “server”, but I don’t have to think about it.

      • manigandham 7 years ago

        No server to manage is the least important part of it all. The name should describe the abstraction that's actually being offered: IaaS is servers, PaaS is a single app, FaaS is an individual function, SaaS is just software.

        Any of those other than IaaS itself can be "serverless".

        • staticassertion 7 years ago

          > No server to manage is the least important part of it all.

          OK, well it's the most important part for me. I don't want to manage patching a box.

          • manigandham 7 years ago

            Sure, but there have always been options for that, regardless of abstraction level.

            • staticassertion 7 years ago

              I don't know of an abstraction that provides FaaS + Managed System other than serverless - what are you referring to?

      • detaro 7 years ago

        We called that "PaaS" for years, why does it need a new name suddenly? Especially a new name that can cause confusion with things that actually do not have a dedicated server?

        • staticassertion 7 years ago

          I think PaaS is overly broad.

          And yes, marketing is a real thing, and names are also picked because they sound cool. There are worse things.

  • Yver 7 years ago

    I can't say much about its technical merits but I find it misleading that an application be described as "serverless" when in actuality it very much uses servers. The "serverless" moniker comes from the Serverless Framework, produced by Serverless, Inc. "Serverless" is the name of a product and should be capitalized accordingly.

  • everdev 7 years ago

    I think the biggest knock against serverless is the same as microservices: your points of failure grow significantly.

    • staticassertion 7 years ago

      I don't see how. Your points of failure were always there, but at most they're more explicit since "service failure" conflates with "network failure" - something I think is beneficial, since you always had to handle "service failures" but they were implicit.

      That is - a piece of code in a monolith may fail, and a piece of code in a microservice may fail, but I've already written the code to handle network errors for the microservice case, which means I've also implicitly handled the "bug in code" case.

      • SahAssar 7 years ago

        That assumes that a local function call has the same failure rate as a remote function call to a microservice, which in my experience is very much not true.

        If I have a local function in the same language I can pretty much assume that a call to that function will actually call that function. With a remote call over HTTP or whatever I can't, so that is an additional failure I need to handle.

        • staticassertion 7 years ago

          I'm not sure I understand. Why would an RPC call a different function than what you expect?

          I'll grant you that there is more complexity in this approach, but I believe that fault tolerance is something you improve with microservices, not something you regress on.

          • squeaky-clean 7 years ago

            It's not that it would call a different function, but that sometimes the RPC will fail to call the function. You can't get a network error calling a function which is in memory on the same process.

            • staticassertion 7 years ago

              This is what I was saying though.

              Yes, you have to handle the failure case of "network call". Microservices add this failure case. But you already had to handle the case of "code blew up because of a bug".

              By forcing you to handle comm errors like network failure, you also force people, implicitly, to handle "code blew up because of a bug" errors. Even though it adds a second error case, you pretty much handle them the same way in the same place.

              There was always an error case - the fact that code may blow up.

              • fauigerzigerk 7 years ago

                >But you already had to handle the case of "code blew up because of a bug".

                I'm handling bugs very differently than network failures though, because network failures are usually temporary while bugs are usually (or even by definition) permanent.

                Dealing with temporary outages in lots of places is extremely difficult. You may need retry logic. You may need temporary storage or queuing. You may need compensating transactions. You may need very carefully managed restarts to avoid DDOSing the service once it comes back online. There may be indirect, non-deterministic knock-on effects you can't even test for properly.

                Microservices cause huge complexity that is very hard to justify in my opinion.

                • staticassertion 7 years ago

                  > I'm handling bugs very differently than network failures though, because network failures are usually temporary while bugs are usually (or even by definition) permanent.

                  Depends on the bug - there are transient bugs that are not networking related.

                  But let's assume it's a "hard error" ie: a consistently failing bug. I would say where that bug is makes a huge difference.

                  If it's a critical feature, that bug should probably get propagated. If it's a non-critical feature, maybe you can recover.

                  By isolating your state across a network boundary, failure recovery is made much simpler (because you do not need to unwind to a 'safe point' - the safe point is your network boundary).

                  But it often depends how you do it. I personally prefer to write microservices that use queues for the vast majority of interactions. This makes fault isolation particularly trivial (as you move retry logic to the queue) and it scales very well.

                  If you build lots of microservices with synchronous communications I think you'll run into a lot more complexity.

                  Still, I maintain that faults were already something to be handled, and that a network bound encourages better fault handling by effectively forcing it upon you.
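
                  As a sketch of that queue-driven style (assuming an SQS event source wired to a Lambda; the message shape and helper are made up):

                    // Lambda handler for an SQS event source (sketch).
                    interface SqsEvent {
                      Records: { body: string }[];
                    }

                    export async function handler(event: SqsEvent) {
                      for (const record of event.Records) {
                        const job = JSON.parse(record.body);
                        // If this throws, the message becomes visible on the
                        // queue again and is retried later (then dead-lettered
                        // after maxReceiveCount), so retry logic lives in the
                        // queue rather than in every caller.
                        await processJob(job);
                      }
                    }

                    async function processJob(job: { id: string }) {
                      // domain logic goes here
                    }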

    • moduspol 7 years ago

      The points of failure you are responsible for maintaining decrease significantly, though.

      And the ones you aren't responsible for are ones that the rest of the world uses every day at 1000x the scale you do.

  • skohan 7 years ago

    I just hope the API has stabilized. I worked on a project a couple of years ago using serverless, and we sunk a ton of time into fixing breaking changes after updates.

  • tenaciousDaniel 7 years ago

    I help run a Node server, and we have some jobs that aren't run through CRON, but instead are set up using in-memory intervals. So I'm considering using a serverless setup to offload the work.

    In your opinion, is it easy to differentiate between dev/prod environments during development? How about logging?

    • qudat 7 years ago

      Logging by default sucks for Lambdas. Because lambdas are not "servers", they do not have an external IP address, which means that if they want to communicate with the external world, you need to set up a NAT for them.

      Differentiating dev/prod is not too bad, everything is labelled based on your naming scheme for serverless.

      Having said that I think lambdas are a great use case for cron jobs.
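
      For the cron use case, a minimal serverless.yml sketch (the service, function, and handler names are made up; schedule events are a documented Serverless Framework feature):

        service: interval-jobs

        provider:
          name: aws
          runtime: nodejs8.10

        functions:
          cleanup:
            handler: handler.cleanup
            events:
              - schedule: rate(15 minutes)  # replaces the in-memory interval
          nightlyReport:
            handler: handler.nightlyReport
            events:
              - schedule: cron(0 2 * * ? *) # 02:00 UTC daily, AWS cron syntax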

  • nostalgeek 7 years ago

    > I don't understand the knee-jerk opposition people have to serverless architectures

    Because it's purely a marketing concern. It means nothing from an architectural standpoint; it's a buzzword, like 'cloud', 'nosql', 'web 2.0' or 'the blockchain'.

    • wnsire 7 years ago

      > because it's purely a marketing concern, it means nothing from an architectural standpoint

      Serverless computing is a system architecture style whose emphasis is abstracting away the entirety of the infrastructure to let developers focus solely on code.

      For instance, with AWS Lambda + API Gateway + S3, developers can create web applications that used to require EC2 servers and a web framework like Spring, Laravel, etc...

      Apps hosted on AWS with EC2 require strong knowledge of system architecture and system design. They are also quite complex to scale, monitor and manage.

      Serverless abstracts all this away and lets you focus only on code.

      AWS Lambda is self-healing, meaning that when a function crashes, only that function crashes, not the entire server. API Gateway is managed by AWS, so it won't crash (or it's very unlikely to). S3, as far as I'm concerned, has never let me down.

      Meaning I could author an entire application similar to Hacker News without any "server", logically speaking. Technically speaking there is always a server, just like NoSQL doesn't mean "no SQL" but "not only SQL". And it's not a buzzword :)

  • na85 7 years ago

    I think it's due to the fact that the terminology seems intentionally misleading at worst, and like a marketing buzzword at best.

    Every technical person understands there's still a server there. So it seems like a marketing tactic intent on misleading clueless CEOs.

    • dragonwriter 7 years ago

      > I think it's due to the fact that the terminology seems intentionally misleading at worst, and like a marketing buzzword at best.

      Originally it was a marketing buzzword designed to make Amazon's FaaS offering seem like a bigger deal than it was, and somewhat misleading in that role, because FaaS of the type it was applied to isn't any more serverless (even from the perspective of what the customer needs to manage) than common PaaS offerings.

      OTOH, I kind of like the way Google seems to have adopted it as a blanket term for cloud services where the customer is not concerned with physical or virtual servers as such; it seems the term is being wrestled into being descriptive and non-deceptive.

      > Every technical person understands there's still a server there. So it seems like a marketing tactic intent on misleading clueless CEOs.

      The non-existence of servers isn't what it communicates; only technical people who also lack any sense of what matters to the business world think that. (It's also targeted more at CTOs/CIOs than CEOs.)

    • paulgb 7 years ago

      > Every technical person understands there's still a server there.

      Everybody also knows that a wireless vacuum has wires inside it, the value prop is that the wires never get in your way. And so with serverless.

  • pbreit 7 years ago

    I don't think it compares to the move to the cloud. The cloud was a lot easier to setup than dedicated and the end result was basically the same...you got a server. Serverless is quite a different paradigm and the benefits are less obvious.

  • matte_black 7 years ago

    Lack of connection pooling for database connections means serverless architecture will never be used for serious data heavy apps.

  • rightbyte 7 years ago

    Serverless in this context is Server-aaS. It's doublespeak. That's why I "knee-jerk".

    Instead of a couple of servers you have thousands of small virtual servers running on real servers, and you call that serverless. It's comical.

    I wonder what hell this approach will be for legacy serverless systems where there is stuff running everywhere and no one got a clue where to pull the plug or patch stuff.

  • gaius 7 years ago

    > I don't understand the knee-jerk opposition people have to serverless architectures

    It’s the name. It infuriates people because obviously there still really are servers. But I think of it like WiFi - there still are wires, you just don’t see them.

    • mmt 7 years ago

      I've read this analogy before, but it's weak enough to be inapplicable.

      With wireless networking and even phones, the wires have actually been eliminated, replaced wholesale by something else, radio. A better analogy would be something like the powerline ethernet systems, but nobody is calling them something controversial like "wireless" or "cableless".

      Put another way, I've seen cloud computing (of which "serverless" is, arguably, merely an evolution into increasing levels of abstraction) called "somebody else's servers". The equivalent with the Wifi analogy would be "somebody else's wires", and, usually, what wires there are, for the backhaul [1], aren't even somebody else's.

      EDIT:

      Ultimately, my point is this:

      The analogy is weak because WiFi is not an abstraction layer on top of wired networking that merely hides the existence of (and, ideally, some of the downsides of dealing with) the wires; it is a different technology with different upsides and downsides. Serverless is such an abstraction layer.

      [1] Which brings up a nitpick: WiFi can even remain wireless in its entirety, with various mesh networking techniques.

    • laumars 7 years ago

      Except there literally aren't any wires in WiFi. Unless you're talking about power cables on access points, or other unrelated tech used on devices that have ethernet / whatever AS WELL AS WiFi?

      • pvg 7 years ago

        All-caps italics is an interesting choice. Is that both emphasized AND shouted?

        • laumars 7 years ago

          Haha yeah on reflection it does look a bit silly. I was intending for it just to stand out against the camelcase of "WiFi" but the caps was a poor choice in hindsight.

sebringj 7 years ago

I ended up using Lambda for several ad hoc things, and I have to say the experience is great for quickly adding functionality to specific niche things you don't necessarily want to run on your API servers, whether because of weight or simply because having a trigger built in makes the whole flow simpler. However, the downside is that if you do something stupid like accidentally create an infinite trigger, your bill will increase exponentially. Always remember that part. On a $5 DigitalOcean instance, they will never charge you $3000 a month for accidentally doing something stupid, but AWS will. They'll forgive you one time at least, and I have had my one time now.

The most hilarious part of this whole thing... and this is really one of the main points: the $5 DigitalOcean Kue.js (Node.js) server instance that my AWS Lambda was smashing the shit out of with millions of requests did not go down the entire time, although it had some intermittent slowdowns of course. $5 goes a long way apparently.

_Marak_ 7 years ago

Congratulations to Austen and Serverless Inc. for raising an additional $10M. From what I've seen they are all very nice people over at Serverless.

I've been running a small service similar to their new "Serverless Platform" for some time and was approached by them in 2014 to see about joining their team.

Ultimately I ended up deciding not to join because I wasn't convinced there was a strong enough engineering presence in their leadership to make a good product. The next couple of years should be interesting to watch if they can actually build a profitable product.

  • nodesocket 7 years ago

    > I wasn't convinced there was a strong enough engineering presence in their leadership to make a good product

    That seems like a strange requirement. Ultimately, for a business to be successful there have to be people who know business, marketing, and sales. If the leadership team is all hard-core tech engineers, there will be a lack of all of the other social and fundamental business skills needed.

    • spamizbad 7 years ago

      I think it’s perfectly reasonable given their domain. If you’re in a technology-frontiering business, for which serverless most certainly qualifies, you need management that’s going to support engineering through the litany of challenges they will face. I’ve seen founders get cold feet and cut corners or pivot away from things when the engineering side seems daunting.

      The other side here is that your customers are likely engineers themselves. You need to build products that connect with them and genuinely make their lives easier... if your leadership is too far removed from that, you’ll end up with a product platform shaped via a game of telephone...

      With that said, some strong hires early on can make a real difference here.

    • drchickensalad 7 years ago

      Not the OP, but they only said it wasn't strong enough, not exclusively engineers. I see plenty of grey area here.

pmlnr 7 years ago

The coming age of people with no understanding of what running code actually means, no idea how hardware/close to hardware systems behave deep down, is going to be fabulous, and full of wasted computing.

  • dragonwriter 7 years ago

    > The coming age of people with no understanding of what running code actually means, no idea how hardware/close to hardware systems behave deep down, is going to be fabulous, and full of wasted computing.

    Just like now is to people who were programming in the 1990s, or the 1990s to people who were programming in the 1970s.

    • pmlnr 7 years ago

      Partially agreed; on the other hand, Arduinos & PICs helped a lot with understanding, say, hardware IRQs, something that is quite heavily abstracted away in general computing.

  • jchw 7 years ago

    I disagree. If you build a good enough abstraction, you can give developers a lot for free without needing them to understand. Current serverless platforms are wasteful, but the underlying concept is not inherently wasteful, and I think that cloud providers and serverless software platforms will improve over time.

    • marenkay 7 years ago

      The need to understand is a primary skill for a decent engineer. It was in the 1960s and still is today. Lack of knowledge is what makes an engineer a business risk.

      • jchw 7 years ago

        I don't need to understand metallurgy thanks to processors. I don't need to understand instruction set architectures or assembly language thanks to programming languages. When abstractions truly separate concerns, they make it very much possible not to understand, much to the benefit of the programmer, who now needs to keep significantly less in their head. Aren't you glad you don't have to be concerned with keeping your stack balanced, thanks to scoping in C? Well, I'd like to write an API without having to be concerned about job scheduling, dispatching, logging, monitoring, etc. Understanding these things may make you a better programmer (or it may not), but not having to understand lets you let go of things you don't need to care about and focus on what you're trying to accomplish instead.

        We as programmers love writing solutions to problems that have been solved hundreds of times. How many node.js http frameworks exist? Or JS frameworks for that matter? But the thing is, for the most part, when it comes to API servers, there's a crazy amount of overlap between what they need to do. They handle HTTP requests and spit out a response. Someone has written this better than you can, so you use an HTTP library. You probably want to serve multiple endpoints on one port, so you route with the URL. What do you do, implement a radix tree, or use a mature, well-tested, high performance library? If you are sane, probably the latter.

        All serverless is, is a realization that most HTTP servers don't need anything special in terms of routing, scheduling, monitoring, etc. By turning an HTTP server inside out, you can let the developer focus on exactly one thing and give them a platform that is stable and inherently scalable, by virtue of being stateless and making scheduling an implementation detail.

        If you have a very efficient engine to execute functions, and a very robust and scalable HTTP server, and you can write your app logic to be stateless (locally, anyway,) there's no reason to believe that the serverless approach would be any technically worse than the old school approach.

        I feel similarly about static files. I don't need to write and operate yet another static file server. I can use Amazon S3 + CloudFront, or Firebase, or GCS, or any other solution. It doesn't mean I don't know how. It's an admission that I have no special requirements, and would like to focus on the things I'm writing that are particular to my app.

        What's dangerous? Not serverless, for sure. What's dangerous, is programmers implementing everything from scratch because they can, doing their own operations when they don't have the resources to.

  • jedberg 7 years ago

    The people programming in the 1970s feel the same way about today, when you use such high-level abstractions as C. That's why programs today require so much RAM and CPU.

    But that is the price of agility. Serverless is just another abstraction on top that increases agility at the price of increased compute.

  • ryanmarsh 7 years ago

    The coming age? I do software development coaching and most every developer I'm contracted to teach doesn't understand what running code actually means (to use your phrase).

    To take it another step, pretty much none of them understand how the JVM or V8 run their code.

  • jssmith 7 years ago

    I think this has been the case for the past 20 years, probably longer, though I'm optimistic that improvement is possible. E.g., perhaps serverless computing is inefficient today, but it could help improve hardware utilization.

    • mmt 7 years ago

      > perhaps serverless computing is inefficient today but it could help improve hardware utilization.

      This was a big selling point of virtualization, originally. It was certainly true for environments that suffered from poor utilization due to, say, running one app per (often oversized and/or antiquated) physical server, as I believe was common for enterprise IT shops.

      Whether this improvement could have been achieved by other technical means (at least in non-Windows environments) is debatable. It's also unclear what percentage of total hardware utilization enterprise IT accounted for back then, and I suspect it was much higher than today.

      For other environments, where virtualization would replace simple Unix time-sharing, it stands to reason that hardware utilization had to go up, if only moderately.

      Interestingly, enterprise IT practices are still so extremely expensive today that moving to a cloud provider is obviously cheaper for them.

  • neom 7 years ago

    What prevents being prescriptive about how the abstractions are consumed or presented?

  • bradhe 7 years ago

    > and full of wasted computing.

    The amount of waste already present would boggle your damn mind.

jaflo 7 years ago

Wooo! I'm happy for them! I am using their stuff right now for a project I'm working on (plug: https://kurz.app/) and I really appreciate the ecosystem serverless is cultivating. Simple stuff like bundling up pip requirements or syncing a local folder with an S3 bucket could be done using a script I write, but through serverless I can just install a package and have it hook in as part of each deploy.

  • brettlangdon 7 years ago

    Wow, this is a really cool application. I needed to cut down a song this week and this is a very cool approach.

    • jaflo 7 years ago

      Thank you! I really appreciate it. Can I ask what you cut the song down for? I think I'll make a ShowHN post soon and am still trying to figure out my market outside of video editors.

danharaj 7 years ago

Here's to hoping ignoring serverless will work as well as ignoring NoSQL worked for me.

  • cphoover 7 years ago

    NoSQL has been and continues to be hugely influential. All major cloud players provide document/object based storage, as well as other NoSQL Solutions. The term "NoSQL" was dumb and overhyped... But I think it's really about using the correct storage solution for the job.

    Non-relational data should be stored in a non-RDBMS. Key-value stores like Redis are immensely useful as caching layers (but they offer so many more features). Graph databases can be used for data with complex relationships that are not easily modeled; they are also good for seeking strong correlations between related items (think person A called person B called person C - Palantir-type searches). Searches can be done way more effectively in a specialized index, like the inverted index used by Lucene/Elasticsearch, which also supports stemming, synonyms, and numerous other features. These are all "NoSQL". NoSQL is not just MongoDB (which isn't nearly as bad as people make it out to be, btw).

    Even traditional RDBMSes are seeing an influx of NoSQL-esque features, like JSON types and operations in Postgres.

    The reason "NoSQL" dbs got popular is because, in my experience, large monolithic relational databases are hard to scale and manage once they become too complex. When you have one large database with tons of interdependencies, it makes migrating data and making schema changes much harder. This, in my opinion, is the biggest issue (more so than the performance problems associated with doing joins to the n-th degree, which are also an issue).

    It also makes separating concerns of the application more difficult when one SQL connection can query/join all entities. In theory, better application design would have separate upstream data services fetch the resources they are responsible for. That data can be stored in an RDBMS or a NoSQL store, but NoSQL forces your hand in that direction.

    As it goes for serverless, this just seems like a natural progression from containerization, I'm interested to see where the space goes.

    Personally I think it's foolish to put your head in the sand when the industry is changing, or learning new concepts.

    • danharaj 7 years ago

      > The reason "NoSQL" DBs got popular, in my experience, is that large monolithic relational databases are hard to scale.

      I've met a lot of people whomst thought they had to scale that big. Very few handled anything that couldn't run off a beefy postgres installation.

      The purpose of a system is what it does. People don't use nosql to scale because they don't need to scale, so what does it do? People use nosql to not write schemas. That's what it's for, for the majority of users.

      If I need a key value store, I use a key value store. There's no flashy paradigm there. If I need to put a container up on the interwebs, I do it. What's serverless? Nosql is an "idea", "paradigm", "revolution", or at least the branding of one. Just the same, serverless.

      I will continue to ignore nosql and serverless.

      The industry sure does change, but do you know how much of that is moving in a real direction and how much is a merry-go-round? Let's brand it "Carousel" and raise 10 million. And in 20 years we can talk about serverless being the new hotness, again.

      • mmt 7 years ago

        > Very few handled anything that couldn't run off a beefy postgres installation.

        My impression, from attempting to evangelize scaling "up" before scaling "out" (because it's both cheaper and much lower effort/labor/time) is that vanishingly few programmers have any idea what a "beefy" installation would even look like.

        I routinely encounter implicit assumptions (partially driven, these days anyway, by what VPS and cloud providers offer) that the "largest" servers are 2U (or 4U, if I'm lucky) and are I/O-limited by the number of disks they can hold in their chassis.

        Similarly, there seems to be a lack of awareness of just how big main memory can be on a single server, even before paying a price premium for higher-density modules.

        Not knowing where the price-performance curve inflection points (for memory and/or CPU) happen to be also seems to be associated with not knowing where the price tops out. It's as if they fear the biggest server they can (and will be forced to) buy will cost a million bucks, rather than $100k.

      • cphoover 7 years ago

        Scale is not just user load, but also scale of application complexity. In my experience, when one DB connection has access to every resource in a complex application, it can lead to some really convoluted queries and make schema changes very difficult because of the cross-cutting dependencies built into those queries, triggers, procedures, etc. And that's before the deadlocks you hit when 80 consuming services, plus applications you don't even know about, are opening up all sorts of transactions. Even just splitting the DB into schemas for each resource domain and limiting access per service can help avoid this.

        Also, performance is relative. I've worked on highly trafficked applications that had to support high throughput, and I have also worked on applications backed by relational storage where data size and complexity impacted performance.

        • the_af 7 years ago

          > "Scale is not just user load, but also scale of application complexity"

          In my experience, when people use NoSQL because "the application is too complex for relational DBs" they tend to make a mess of it, NoSQL included. They usually end up reinventing the wheel and re-writing buggy versions of features a RDBMS would have given them natively.

        • zaarn 7 years ago

          I don't think I've seen a deadlock in a long long time on most major DB platforms.

          PG also lets you get very vague about it being a relational DB if you want.

          And tbh, if the size of your table impacts performance, you either don't have a very good DBA or your DBA doesn't know what partitioning is, both good reasons to replace them.

          Most modern DBs don't have any of these issues. PG can cleanly handle live schema changes since it packs those in transactions; old transactions simply use the previous schema. MariaDB requires a bit more fiddling, but GitHub figured it out.
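
          A quick sketch of what transactional DDL looks like through psycopg2 (the DSN and table are made up for the example):

              import psycopg2

              conn = psycopg2.connect("dbname=app")  # placeholder DSN

              # DDL participates in transactions in Postgres: if anything
              # below raises, the ALTER TABLE rolls back with the UPDATE.
              with conn:
                  with conn.cursor() as cur:
                      cur.execute("ALTER TABLE users ADD COLUMN last_seen timestamptz")
                      cur.execute("UPDATE users SET last_seen = now()")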

          And from experience, you're likely not going to hit the scale where you need multiple DB nodes for performance. In 10 out of 10 cases, a simple failover is what you need (but didn't invest in because MongoDB is cooler).

        • SahAssar 7 years ago

          > when one db connection has access to every resource

          So why not use db users to restrict each part to only be able to access the parts it should?

          • cphoover 7 years ago

            Sure that works... I think encapsulation through separate db schemas is generally sufficient. Most people don't start or end up here however. I'm not saying that RDBMS used correctly is a bad thing. I prefer multiple small postgres schemas per "data service" (what I'm calling a service that deals only with data persistence, and updating consumers about changes to data), each schema can correlate to a single resource, or smallest possible domain of the application. These services can publish notifications about updates that can be consumed by consuming downstream services.

            It's my opinion that micro-services should do one thing and do it well, and that the data storage backing these services should only be concerned with the domain of that single-purpose service. It should be isolated from all other concerns.

            Having a separate schema for "users" than for "messages" for example.

            Where to draw those dividing lines is not always easy.
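
            For illustration, the users/messages split above might look like this in plain SQL run through psycopg2 (all names and the DSN are hypothetical):

                import psycopg2

                conn = psycopg2.connect("dbname=app")  # placeholder DSN
                with conn:
                    with conn.cursor() as cur:
                        # one schema per resource domain
                        cur.execute("CREATE SCHEMA IF NOT EXISTS users")
                        cur.execute("CREATE SCHEMA IF NOT EXISTS messages")
                        # one role per data service, confined to its own schema
                        cur.execute("CREATE ROLE messages_svc LOGIN PASSWORD 'changeme'")
                        cur.execute("GRANT USAGE ON SCHEMA messages TO messages_svc")
                        cur.execute(
                            "GRANT SELECT, INSERT, UPDATE ON ALL TABLES "
                            "IN SCHEMA messages TO messages_svc")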

      • Zelphyr 7 years ago

        Very much this. Sooooo many times I hear the cry of "does it scale?" To which I reply, "Does it need to?!"

        At my last company we had a developer question scalability constantly despite the fact that the average customer of an instance of our product had about 200 users.

        • mmt 7 years ago

          I like to add, "does it need to beyond what's delivered by Moore's Law?" (which I use as a metaphor for all increases in computing performance, including I/O, which has of course increased at a much slower, but far from zero, pace).

          If your CPU utilization from user growth is doubling every 2 years, but so is CPU capacity, then don't worry about it.
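
          The arithmetic, as a toy sketch:

              # Toy numbers: load doubles every 2 years, but so does capacity.
              capacity, load = 100.0, 10.0
              for year in range(0, 11, 2):
                  print(f"year {year:2d}: {load / capacity:.0%} utilized")
                  load *= 2
                  capacity *= 2
              # Prints "10% utilized" every time: growth that merely tracks
              # hardware improvement never forces a scale-out on its own.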

      • zzzcpan 7 years ago

        > Very few handled anything that couldn't run off a beefy postgres installation.

        Beefy postgres would get you to 99.9% availability at best, with pretty bad tail latency and would cost quite a bit to operate. As it turns out, very few can actually live with that. And even infamous MongoDB can do better at this than PostgreSQL. Ignorance simply makes your business less competitive.

        • danharaj 7 years ago

          > Beefy postgres would get you to 99.9% availability at best

          This is just false. Shrug.

    • jacques_chester 7 years ago

      > monolithic large relational databases are hard to scale

      DB2 on z/OS was able handle billions of queries per day.

      In 1999.

      Some greybeards took great delight in telling me this sometime around 2010 when I was visiting a development lab.

      > When you have one large database with tons of interdependencies, it makes migrating data, and making schema changes much harder.

      Another way to say this is that when you have a tool ferociously and consistently protecting the integrity of all your data against a very wide range of mistakes, you have to sometimes do boring things like fix your mistakes before proceeding.

      > In theory better application design would have separate upstream data services fetch the resources they are responsible for.

      A join in the application is still a join. Except it is slower, harder to write, more likely to be wrong and mathematically guaranteed to run into transaction anomalies.
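
      To make that concrete, here are the two shapes side by side (a sketch; the fetch functions are hypothetical stand-ins for whatever data services you'd call):

          # The database version: one statement, one round trip, planned for you.
          #   SELECT o.id, c.name
          #   FROM orders o JOIN customers c ON c.id = o.customer_id;

          # The "join in the application" version: two reads, a hand-built
          # hash index, and no isolation between the reads (they can observe
          # different snapshots of the data).
          def app_side_join(fetch_orders, fetch_customers):
              customers = {c["id"]: c for c in fetch_customers()}
              return [(o["id"], customers[o["customer_id"]]["name"])
                      for o in fetch_orders()]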

      I think non-relational datastores have their place. Really. There are certain kinds of traffic patterns in which it makes sense to accept the tradeoffs.

      But they are few. We ought to demand substantial, demonstrable business value, far outweighing the risks, before being prepared to surrender the kinds of guarantees that a RDBMS is able to provide.

      • cphoover 7 years ago

        Not everything requires pessimistic transactional guarantees or atomicity. The problem domain you are solving for will influence the importance of those guarantees. If I'm solving for something where data consistency is not an utmost priority (tons of applications meet this criterion, including the one you are using right now: HN), I don't have to worry about this.

        But when you have transactional guarantees you also lose partition/failure tolerance. So it ends up being a choice of consistency over availability.

        • jacques_chester 7 years ago

          > Not everything requires pessimistic transactional guarantees or atomicity.

          They are easier to give up after the fact than to try to regain after the fact.

          > If I'm solving for something where data consistency is not an utmost priority (tons of applications meet this criteria, including the one you are using now HN.) I don't have to worry about this.

          Sure. But wait for the pain. Prove the business need to relax the guarantees and the business acceptance of the risks.

          > So it ends up being a choice of consistency over availability.

          Total partitions are relatively rare and so disruptive that even if the magical datastore keeps chugging, everything else is mostly boned, so it doesn't matter. Meanwhile people tend to discover that actually, consistency mattered all along, but it's impossible to fix in retrospect.

          Then there's the whole thing of bold claims being made in theory and not delivered in reality. RDBMSes, with the exception of MySQL which is close to being singlehandedly responsible for the emergence of NoSQL in the first place, tend to actually deliver on what they promise. The record for the alternatives is mixed, the fine print varies wildly and tends to leave out important details like "etcd split brains if you sneeze too loudly" or "mongodb is super fast, unless you want your data back".

  • exabrial 7 years ago

    This is anecdotal, and I've read cases of the opposite, so I know there are downvotes incoming.

    I've yet to work on a NoSQL system where I thought "thank goodness we didn't use a structured database!". Instead, every time, it's been the HIPPO trying to defend the decision while everyone else just deals with it. NoSQL seems to be taking out a giant loan... you're going to need to organize and parse your data at some point (or why would you keep it?). Pushing that decision into the future just makes it harder on everyone.

    • zaarn 7 years ago

      Schemaless definitely has a few applications, usually systems related to tagging. Luckily you can easily integrate schemaless into your Postgres database with no performance downside all thanks to the magic of JSONB or FDW, depending on which way you swing.
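
      A sketch of the tagging case with JSONB (table, index, and DSN are invented for the example):

          import json
          import psycopg2

          conn = psycopg2.connect("dbname=app")  # placeholder DSN
          with conn:
              with conn.cursor() as cur:
                  cur.execute("CREATE TABLE IF NOT EXISTS items "
                              "(id serial PRIMARY KEY, tags jsonb)")
                  cur.execute("CREATE INDEX IF NOT EXISTS items_tags_gin "
                              "ON items USING gin (tags)")
                  cur.execute("INSERT INTO items (tags) VALUES (%s::jsonb)",
                              (json.dumps({"color": "red", "size": "xl"}),))
                  # @> is containment: "does tags include this document?"
                  # The GIN index above serves it.
                  cur.execute("SELECT id FROM items WHERE tags @> %s::jsonb",
                              (json.dumps({"color": "red"}),))
                  print(cur.fetchall())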

      The very few pure schemaless databases that continue to exist and where I'm convinced they will continue to exist for a long while are those that specialize a lot (ie, Redis, Elasticsearch, a lot of the Timeseries databases).

  • y4mi 7 years ago

    Serverless is at its heart - as I understand it - a dockerized microservice, abstracted away to a degree that the developer no longer has to think about anything but his application code.

    You'll definitely be able to ignore it and it probably won't be used in smallish companies for ages.

    It's just an easier way to get your application to scale than homebuilt Docker images were.

    • rhlsthrm 7 years ago

      > You'll definitely be able to ignore it and it probably won't be used in smallish companies for ages.

      Why do you say this? I feel like this would be very useful for smallish companies. I'm running eng for my 3 person startup and looking into using Lambda-based microservices with Serverless for our next project. My goal is to completely minimize devops time for our engineers, as well as reduce cost compared to PaaS services.

    • xchaotic 7 years ago

      "a degree that the developer no longer has to think about anything but his application code."

      A developer still has to understand the implications of resource consumption, etc. For performance-critical pieces of code, IMO it's better to have direct access to the hardware; I had recent first-hand experience with this while debugging a NUMA-related performance issue.

      • mcintyre1994 7 years ago

        Is there a performance-critical case that isn't already ruled out by the Lambda cold boot problem though?

      • y4mi 7 years ago

        Nor would you use a dockerized microservice for that, or would you?

        Serverless is - as I said before - a dockerized microservice at its heart. It should only be used in places where you'd do it without the abstraction.

        There are a lot of services/applications you can build with this: for example, adapters for external SaaS products that need to trigger certain actions, or just plain JSON APIs which query a DB and output their results...

        But using it to join 2 TB of data and process it afterwards in realtime? Yeah, that's not a valid use case for serverless.
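
        A sketch of the "plain JSON API" kind of handler, assuming API Gateway's Lambda proxy event shape (the datastore query is stubbed out):

            import json

            def handler(event, context):
                # event comes from API Gateway's Lambda proxy integration
                item_id = (event.get("pathParameters") or {}).get("id")
                # ...query your datastore here; hardcoded for the sketch...
                return {
                    "statusCode": 200,
                    "headers": {"Content-Type": "application/json"},
                    "body": json.dumps({"id": item_id, "status": "ok"}),
                }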

    • chii 7 years ago

      > no longer has to think about anything but his application code.

      I mean, CGI has always existed. This serverless hype is basically a rebrand of CGI with some fancy orchestration around autoscaling across boxes (which, tbh, isn't really that much work, and most people don't need the scale required to make this worthwhile anyway).
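
      The family resemblance is easiest to see side by side (a sketch, not production code):

          import json, os, sys
          from urllib.parse import parse_qs

          # CGI, ca. 1995: the server sets env vars, runs this per request.
          def cgi_main():
              qs = parse_qs(os.environ.get("QUERY_STRING", ""))
              name = qs.get("name", [""])[0]
              sys.stdout.write("Content-Type: application/json\r\n\r\n")
              sys.stdout.write(json.dumps({"hello": name}))

          # Lambda, ca. 2018: the platform parses the request into a dict.
          def handler(event, context):
              name = (event.get("queryStringParameters") or {}).get("name", "")
              return {"statusCode": 200, "body": json.dumps({"hello": name})}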

      • mmt 7 years ago

        > isn't really that much work

        I suspect that it's this little parenthetical tidbit, and implicit disagreement with it (or differing definitions of "much") that drives the creation of this kind of abstraction.

        In some situations, I consider the gap to be legitimate, where it may be easy (not that much work) for an expert but difficult for everyone else, and, more importantly, becoming expert is non-trivial, even with training/mentorship from one.

        In other situations, I consider the gap to be merely one of perception/misestimation, either because it would actually be relatively easy for a non-expert who had actually tried, and/or the needed expertise is shallow enough that it can be quickly taught.

        I believe autoscaling is (or at least originally was) of the former category and that the availability of tools and abstractions around it has allowed a broad number of non-experts to leverage the wisdom of a much narrower group of expert practitioners.

        OTOH, I believe running hardware in a datacenter (as opposed to outsourcing it to a VPS or even cloud) is of the latter category. I routinely read comments like "have to hire 5 sysadmins" from non-experts when we experts know that estimate is around 20x too high for a scale of hundreds of servers. Even at higher scale, if hiring is necessary, the hardware-specific skills are easily taught, so junior staff is fine.

  • autotune 7 years ago

    Serverless has its place; that said, I'm not sure how well this company is going to end up doing. I might be a bit biased, but the most common serverless applications I've seen are all about integrating cloud services with each other, which already happens on the cloud provider's own platform.

  • giancarlostoro 7 years ago

    Didn't Google already make a cloud-agnostic platform called OpenCloud? That's a better name, one that doesn't target just "serverless", aka Azure Functions and Amazon Lambda.

    The use of "Serverless" is not having to deal with an "IT" guy at all who complains about setting up your app cause you updated the STACK and now it collides with everything else on the same server. Also makes it so you don't necessarily have to use containers.

    • toomuchtodo 7 years ago

      I think the problem is all of this tooling (Docker, Serverless, NoSQL) has been created to “support developer velocity”, which really just ends up as technical debt. You can’t magic away the need for experience and domain knowledge.

      Docker doesn’t replace the need to know how VMs work. Containers don’t magically allow you to scale to infinity (Although k8s shows a lot of potential). And you probably should be using PostgreSQL instead of NoSQL unless you’re absolutely sure you’re smart enough to know why PostgreSQL can’t work for your use case.

      Serverless is great if you want to replace a cron job, the value of the function firing is substantially higher than the cost to run it (“margins are crazy high, optimize to VMs later and ignore the AWS bill for now”), or you’re executing untrusted code in isolation for customers.
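
      The cron-replacement case really is about as small as it sounds. A sketch, with the actual job stubbed out (the schedule wiring in the comment is Serverless Framework syntax; everything else is a placeholder):

          # handler.py -- in serverless.yml this would hang off a schedule
          # event, e.g. "events: [{schedule: rate(1 hour)}]"
          import logging

          logger = logging.getLogger()
          logger.setLevel(logging.INFO)

          def purge_expired_sessions():
              return 0  # stub standing in for the real cleanup logic

          def run(event, context):
              # event carries the CloudWatch schedule payload; often ignorable
              deleted = purge_expired_sessions()
              logger.info("purged %d expired sessions", deleted)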

      • sbinthree 7 years ago

        I am learning this lesson the hard way with dynamically typed languages on the server. If the documentation and the database have to be statically typed, you should really use types in the code too. So dynamically typed languages on the server are impossible(?)

        • cphoover 7 years ago

          No offense, but BS. What you have claimed is totally unsubstantiated, and not aligned with my experience working on highly trafficked eCommerce applications.

          If you know what you're doing you can write elegant, performance-tuned, secure and maintainable code in a dynamic language. I've also seen poorly written code in statically typed languages.

          It really comes down to who is writing the code, what standards they abide by, and their architectural prowess.

          • eropple 7 years ago

            > If you know what you're doing you can write elegant, performance-tuned, secure and maintainable code in a dynamic language.

            You can! You totally can. But, statistically speaking? You probably won't. Neither will I. And that's why the minimal level of guardrails I'll put up with in 2018 is TypeScript and I'd really rather have better.

            • always_good 7 years ago

              You're rehashing the old dynamic- vs static-typed debate.

              But what the upstream comment said was just wrong: that because the documentation and database are statically typed, the application must be too. It doesn't really make sense. See their use of "impossible".

              For example, your database types or your application types aren't what your API documentation annotates. Your docs annotate your endpoint contracts, not the implementation detail behind them.

              • eropple 7 years ago

                If you take "must" in the pocket-protectory overly-literal way, yeah, sure. But when you act generously and decent and take it to reference what is tenable and acceptable, things take a different turn. And it is in that light that I approach the topic and the speaker, because generosity and decency are pretty good things.

  • Xorlev 7 years ago

    But don't you want perfect elastic scalability for your blockchain^W distributed ledger project? /s

  • Bahamut 7 years ago

    This comment is a little strange - NoSQL is prevalent, especially when dealing with large data sets, or certain problems such as search.

    That said, I’m not sold on serverless.

actionowl 7 years ago

We've been using serverless with AWS lambdas for a few months.

Testing is hard: the more AWS shit you tie yourself to, the harder local testing and development becomes. I picked up a lambda project another developer started and asked them how they were testing and developing locally. Answer: they deployed a new update for every change (!?)
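
A canned-event harness avoids that deploy loop (a sketch; the module, function, and file names are placeholders for whatever your project uses):

    # invoke_local.py -- exercise the function without deploying anything
    import json
    from handler import handler  # hypothetical module/function names

    # sample_event.json: a payload captured from a real invocation
    with open("sample_event.json") as f:
        event = json.load(f)

    print(handler(event, context=None))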

Debugging: Look at log files...

Also, at some point serverless added some autocomplete crap to my .bashrc file without asking which I will never forgive them for.

SSilver2k2 7 years ago

Serverless is great, but I am really loving Zappa for Python Flask and Django development with Lambda and API Gateway.

Deployed our first production tool with it and it's been working great.
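
For the curious, the app itself stays a stock Flask app; a minimal sketch (the route is made up, and Zappa's init/deploy commands then ship it as-is):

    # app.py -- deployable with "zappa init" + "zappa deploy" once configured
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/health")
    def health():
        return jsonify(status="ok")

    if __name__ == "__main__":
        app.run()  # the same app runs locally during development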

  • inspector14 7 years ago

    do you have any strong opinions re: differences between chalice / zappa?

    I was looking at these two recently and ended up going with chalice as the docs seemed a bit simpler and more readily accessible.

    • wahnfrieden 7 years ago

      One point: Chalice is non-portable if you outgrow Lambda and want to rehost, whereas Zappa is just Django.

  • gsibble 7 years ago

    Love Zappa.

i386 7 years ago

This looks interesting and I wish them good luck. The problem with any developer-tools startup is that no matter how great the product is, the willingness to pay is very low (developers think they can replace you with a small script), and it's extremely likely that your market gets slowly eaten from the bottom by open source, and/or that MS/Google/Amazon fold your service into their cloud platforms.

robertonovelo 7 years ago

I usually do TDD on serverless apps by debugging unit tests with jest. Is that bad practice? Anyone can easily mock events this way; it does not matter whether it's an SNS event or an HTTP event. Overall, I have had a great experience with serverless!
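
The same idea in Python rather than jest, for illustration (the handler module, its return shape, and the message contents are placeholders):

    import json
    import unittest

    from handler import handler  # hypothetical module under test

    def make_sns_event(message):
        # the minimal slice of a real SNS event that most handlers read
        return {"Records": [{"Sns": {"Message": json.dumps(message)}}]}

    class HandlerTest(unittest.TestCase):
        def test_handles_sns_message(self):
            result = handler(make_sns_event({"action": "ping"}), context=None)
            self.assertEqual(result["statusCode"], 200)

    if __name__ == "__main__":
        unittest.main()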

nunez 7 years ago

Super happy for them. It's clear that lots of companies and people are interested in serverless, as the benefits are real. This could be how the "microservice" actually manifests itself in a few years.

neom 7 years ago

Hope DigitalOcean works to make this a first-class citizen on their cloud.

jaequery 7 years ago

I think more than half the websites on the internet could run on a serverless platform, and that would make the web more secure and faster.

  • zaarn 7 years ago

    I think more than half the websites on the internet could run on a shared host for less than $3 a month, and that would make the web more secure and faster.
