Lessons Learned from Benchmarking Serverless Workloads


Transcript

  1. Serverless: Quo Vadis?
     •  We hope our serverless workload generator is a useful tool for anybody looking at serverless services who wants to compare them in more depth.
     •  We plan to write a blog post about the experience of serverless workload generation and benchmarking, and to open source the benchmark.
     •  If you have questions or feedback, or want to tell me where I am wrong: Aleksander Slominski @aslom (tweet me!) aslom email at us.ibm.com

  2. Serverless Workload Characteristics
     •  Serverless workloads can require thousands of concurrent short-lived containers to be created and destroyed in milliseconds.
     •  Container aka Action aka Function aka ... – depends on the serverless service, framework, ...
     •  Required operations:
        –  Start lots of actions
        –  Generate work: start, do work, stop, and repeat
        –  Actions run for some time to allow for reuse (cold vs. hot) and must be fast!
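The action lifecycle described above (start once, then handle repeated work so a warm container is reused) can be sketched in a few lines. This is a minimal illustration, not code from the deck; the function name and response format are assumptions:

```python
import time

def run_action(requests, work_ms=10):
    """Sketch of an action's lifecycle: start once, then process many
    requests so a warm ("hot") container is reused instead of being
    cold-started for each request."""
    responses = []
    for req in requests:                 # generate work: do work, repeat
        time.sleep(work_ms / 1000.0)     # simulate M milliseconds of work
        responses.append(f"done:{req}")
    return responses

# A warm action handles all requests without paying startup cost again.
print(run_action(["a", "b", "c"], work_ms=1))
```

The point of the sketch is that startup cost is paid once per container, while per-request cost is only the work itself.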

  3. Serverless Workload Benchmark Goals
     •  Simulate the lifecycle of a serverless action as it takes part in a serverless workload
     •  Minimal scenario:
        –  Test serverless action start time
        –  Send 1..N requests and validate the responses
        –  Pause / resume actions as needed
        –  Stop (kill) actions
     •  A scenario sets how many actions are started, when, for how long, etc.
     •  A workload runs multiple scenarios (in sequence, in parallel, etc.)
     •  Gather statistics about workload execution – enough to learn how well test environments handle such high-concurrency scenarios
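The minimal scenario above can be sketched as a small driver loop. The action is abstracted as three callables so the sketch runs without any serverless backend; all names and the `ok:<i>` response format are illustrative assumptions:

```python
import time

def run_minimal_scenario(start_action, send_request, stop_action, n_requests):
    """Sketch of the minimal scenario: measure action start time,
    send 1..N requests and validate each response, then stop (kill)
    the action and report statistics."""
    t0 = time.monotonic()
    action = start_action()                      # test action start time
    start_time_ms = (time.monotonic() - t0) * 1000.0
    ok = all(send_request(action, i) == f"ok:{i}"
             for i in range(n_requests))         # send N requests, validate
    stop_action(action)                          # stop (kill) the action
    return {"start_time_ms": start_time_ms, "all_responses_valid": ok}

# Stand-in "action" callables so the sketch is self-contained.
result = run_minimal_scenario(
    start_action=lambda: object(),
    send_request=lambda action, i: f"ok:{i}",
    stop_action=lambda action: None,
    n_requests=5,
)
print(result["all_responses_valid"])
```

A real workload would run many such scenarios in sequence or in parallel and aggregate the returned statistics.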

  4. Simple Scenario Setup: Docker
     •  Start the test driver container; when it starts running, it opens listening sockets and starts S scenario containers
        –  docker run driver -e setup_for_scenario_containers
     •  Each scenario container connects over a websocket to the test driver and starts A action containers
        –  docker run scenario -e setup_for_action_containers -e WS_CALLBACK=ws://test_driver:port
     •  Each action container, when started, connects over a websocket back to its scenario container to ask for requests
        –  docker run hello-action -e WS_CALLBACK=ws://scenario:port

  5. Simple Scenario Execution
     Execution:
     •  The test driver container, after starting S scenario containers, waits on a websocket for results from the scenario containers
     •  Each scenario container, after starting A action containers, waits on a websocket for the action containers, then starts sending N requests and waits for the responses
     •  Each action container, after starting, sends "ready" over the websocket, then waits for requests, processes each request (sleeps for M milliseconds), and sends a response back
     End result:
     •  1 + S + S*A containers running (driver container + scenario containers + action containers)
     •  N * S * A requests processed
     •  Duration: ideal time (zero startup time): N * M milliseconds
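The end-result arithmetic above is easy to check with a tiny helper (the function name is illustrative):

```python
def workload_totals(s, a, n, m):
    """Totals from the slide, for S scenario containers, A action
    containers per scenario, N requests per action, and M milliseconds
    of work per request."""
    containers = 1 + s + s * a   # driver + scenarios + actions
    requests = n * s * a         # requests processed overall
    ideal_ms = n * m             # zero-startup duration: actions run in
                                 # parallel, each handling N requests of M ms
    return containers, requests, ideal_ms

print(workload_totals(s=10, a=100, n=50, m=20))  # → (1011, 50000, 1000)
```

So even a modest setup (10 scenarios, 100 actions each) already means more than a thousand concurrent containers, which is exactly where the scaling issues in the next slide appear.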

  6. Lessons Learned
     •  Scaling becomes harder as size increases
        –  We can easily run hundreds of containers, but run into issues when running 1000
     •  Limitations in the Docker engine
        –  It seems we hit limits on how many processes can be started per second
        –  The limits differ between Docker versions
     •  Locking in Swarm
        –  Also experienced with Mesos (different timing of some operations leads to deadlocks ...)

  7. More Results at MesosCon EU
     •  We will present our experience running workload experiments on Mesos, share Mesos tuning tips, and release the workload generation code, to help make Mesos an ideal platform for serverless workloads:
     •  http://sched.co/7opW Wednesday, August 31, 2:00pm - 2:50pm, Amsterdam