DockerCon EU 2015 has ended

Use Case
Monday, November 16

11:45 CET

Using Docker and SDN for telco application development and deployment
In this talk we will present how at Bell-Labs (Alcatel-Lucent R&D division) we benefit from using Docker in combination with the SDN solution from Nuage Networks for development and deployment of a next-gen chat-based communication platform. This communication platform does intensive data analytics, runs a number of multi-media services and can control remote appliances (e.g. thermostats, robots, cameras, etc). We will illustrate the stringent telco requirements to successfully operate such a communication platform, including some non-functional needs like high-availability, reliability, elasticity, QoS and lifecycle management. Furthermore, we will explain why we selected docker as a hosting platform and how we have utilized it. We will also share some of the deployment scenarios we are facing and how these are addressed by combining Docker and Nuage VSP. Lastly, we will share the lessons we have learned during this development process, and propose some improvements/extensions for Docker to evolve into an application stack that is able to meet the stringent needs of telco applications. 

Nico Janssens

Senior Researcher, Bell Labs, Alcatel-Lucent
Nico is a senior researcher at Alcatel-Lucent Bell Labs, currently working on a new communication and collaboration service. Before joining Alcatel-Lucent Bell Labs in October 2009, Nico worked for a start-up company developing a laboratory information management system for High Content...

Monday November 16, 2015 11:45 - 12:30 CET
Level 1, Room 114

14:00 CET

Using Docker with NoSQL

Understanding internal operations is crucial in financial services. Are public interfaces running smoothly? Are the back-end business systems as productive as they could be? Are infrastructure resources being allocated correctly based on business need? These are exactly the kinds of questions that organizations must be able to answer, yet, surprisingly, many struggle with them. A tool called The Salamander has given the bank an unparalleled ability to optimise and simplify business IT processes, which ultimately cuts costs and improves the customer experience. The Salamander team designed a solution running on a cloud computing architecture with several NoSQL stores such as Neo4j, MongoDB and Redis. From these repositories it generates data visualisations that clearly show the relationships among operations. The front-end and back-end of the application communicate via RESTful APIs, and Node.js-based servers provide elasticity when accessing the stored data. The new challenges relate to loading and using specific stored result sets that are useful for diagnosis but not for immediate reading. This is where Docker comes in, as a solution offering a fast and easy custom database service.

Manuel Eusebio de Paz Carmona

Software Architect, BEEVA (a BBVA Company)
Software architect and developer at BEEVA, specialized in Node.js, and a MongoDB Certified Developer. Enthusiast of open source and cloud computing. His latest projects mostly involve data visualisation with client-side technologies such as the MEAN stack, D3 & NVD3. Manuel tweets at @manu...

Monday November 16, 2015 14:00 - 14:45 CET
Level 1, Room 114

14:55 CET

Swarming Spark applications
We built Zoe, an open source user-facing service that ties together Spark, a data-intensive framework for big data computation, and Swarm, the Docker clustering system. It targets data scientists who need to run their data analysis applications without having to worry about system details. Zoe can execute long-running Spark jobs, but also Scala or IPython interactive notebooks and streaming applications, covering the full Spark development cycle. When a computation finishes, its resources are automatically freed and made available for other uses, since all processes run in Docker containers.

In this talk we are going to present why Zoe, our Container Analytics as a Service, was born, its architecture and the problems it tries to solve. Zoe would not exist without Swarm and Docker, and we will also talk about some of the stumbling blocks we encountered and the solutions we found, in particular in transparently connecting Docker hosts over a physical network. Zoe was born as a research prototype, but it is now stable and is currently used to run real jobs from users in our research institution. Application scheduling on top of Swarm and optimized container placement will also be covered during the presentation.

Daniele Venzano

Research Engineer, EURECOM
Daniele Venzano has worked as a Research Engineer in the Distributed Systems Group at Eurecom in Sophia Antipolis, southern France, since 2013. His main focus is virtualization technologies, with an eye to optimizations for data-intensive frameworks like Spark and Hadoop. Before, he was part...

Monday November 16, 2015 14:55 - 15:40 CET
Level 1, Room 114

16:25 CET

Placing a container on a train at 200 mph
At Uber, we've been introducing Docker to give service owners more control over their environments. However, everything at Uber moves very fast, so we had to do it in a way that fit Docker into the existing infrastructure and let services migrate seamlessly to Docker without any service interruptions. In this talk we will cover the challenges we faced along the way, such as handling both non-Docker and Docker builds, image replication, integration with our deployment systems, and other issues that arise when deploying Docker at scale.

Casper Svenning Jensen

Software Engineer, Infrastructure, Uber
Casper is a Software Engineer at Uber, working on all things Docker as well as Uber's deployment and cluster management system. Before Uber, Casper attained his PhD degree at Aarhus University, working on automated testing of web applications.

Monday November 16, 2015 16:25 - 17:10 CET
Level 1, Room 114

17:20 CET

Finding a Theory of the Universe with Docker and Volunteer Computers
Cosmology@Home is a project which uses volunteer computing to analyze cosmological data and answer questions about our universe such as "how much dark matter is there?" and "under what conditions did the Big Bang occur?" We recently began using Docker by taking each job which we would normally send to our volunteer computers and packaging it up inside a Docker container. The volunteer computers themselves come from interested users all over the world who download and run the software (called BOINC) that allows them to become volunteers. The system is working exceedingly well, and using Docker has made it massively easier for us to develop and run it. I will explain some of the technical details of the implementation, which involves a customized boot2docker ISO, as well as give a brief summary of the scientific questions we are trying to answer and how the results made possible by Docker are helping analyze data from, for example, the European Space Agency's Planck satellite.

Dr. Marius Millea

Institut Lagrange de Paris
Dr. Marius Millea is a cosmologist and postdoctoral fellow at the Institut Lagrange de Paris. He is the main developer of Cosmology@Home, which uses Docker and volunteers' computers all over the world to answer questions about our universe. Dr. Millea has been interested in putting...

Monday November 16, 2015 17:20 - 18:05 CET
Level 1, Room 114
Tuesday, November 17

11:15 CET

Trading Bitcoin with Docker

Bity is an internet money gateway built by Swiss Bitcoin Exchange (SBEX). To trade bitcoin, Bity's entire infrastructure runs in Docker containers: the frontend applications and load balancer, the Django-based backend, the replicated Postgres database, the Bitcoin daemon and the remittance engine. All software goes through a CI pipeline that starts with Docker images being built and stored in private repositories on Docker Hub. Developers also take advantage of a docker-compose definition that allows them to run the entire infrastructure on a single laptop. Finally, production deployments happen via the Ansible Docker module on a CloudStack-based public cloud. Everything has been automated to ease re-deployment and operations. This presentation will walk through every component and show how Docker enabled us to go to production in four months.
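
A docker-compose definition like the one described might look roughly like this sketch, in the v1 compose format current at the time; all service and image names here are illustrative assumptions, not Bity's actual configuration:

```yaml
# Hypothetical sketch only; services and images are illustrative, not Bity's.
lb:                          # load balancer in front of the web apps
  image: haproxy:1.5
  ports:
    - "80:80"
  links:
    - backend
backend:                     # Django-based backend
  image: example/bity-backend
  links:
    - db
    - bitcoind
  environment:
    DATABASE_URL: postgres://app@db/app
db:                          # Postgres (replicated in production)
  image: postgres:9.4
bitcoind:                    # Bitcoin daemon
  image: example/bitcoind
```

With a file like this in place, `docker-compose up -d` brings up the whole stack on a laptop and `docker-compose ps` shows its state.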

Sebastien Goasguen

VP, Apache CloudStack
Sebastien is a senior open source architect and a member of the Apache Software Foundation; he is the current VP of Apache CloudStack and a member of the Libcloud PMC. He has 15 years of experience in distributed systems, from high-performance computing to clouds and now container orchestration...

Tuesday November 17, 2015 11:15 - 12:00 CET
Level 1, Room 114

13:30 CET

Continuous Integration with Jenkins, Docker and Compose

Oxford University Press (OUP) recently started the Oxford Global Languages (OGL) initiative (http://www.oxforddictionaries.com/words/oxfordlanguages), which aims to provide language resources for digitally under-represented languages. In August 2015 OUP launched two African language websites, for Zulu (http://zu.oxforddictionaries.com) and Northern Sotho (http://nso.oxforddictionaries.com). The backend of these websites is based on an API that retrieves RDF data from a triple store and delivers it to the frontend as JSON-LD.

The entire micro-service infrastructure for development, staging, and production runs on Docker containers in Amazon EC2 instances. In particular, we use Jenkins to rebuild the Docker image for the API, a Python Flask application, and Docker Compose to orchestrate the containers. A typical CI workflow is as follows:

- a developer commits code to the codebase 
- Jenkins triggers a job to run unit tests 
- if the unit tests are successful, the Docker image of the Python Flask application is rebuilt and the container is restarted via Docker Compose 
- if the unit tests or the Docker build fail, the monitor view shows the Jenkins jobs in red and displays the name of the possible culprit who broke the build.
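
The rebuild-and-restart step in the workflow above can be sketched as a short shell fragment of the kind a Jenkins job might run; the image and service names are assumptions for illustration, not OUP's actual configuration, and the wrapper only prints each command by default so the sketch reads as a recipe:

```shell
#!/bin/sh
# Sketch of the Jenkins build step described above; names are illustrative.
# With DRY_RUN=1 (the default here) each command is printed, not executed.
set -e
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run python -m pytest tests/            # 1. run the unit tests; set -e aborts on failure
run docker build -t example/ogl-api .  # 2. rebuild the Flask API image
run docker-compose stop api            # 3. restart the API container
run docker-compose up -d api           #    via Docker Compose
```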

A demo of this CI workflow is available at http://www.sandrocirulli.net/continuous-integration-with-jenkins-docker-and-compose 


Sandro Cirulli

Platform Tech Lead, Oxford University Press
Sandro Cirulli works as Platform Tech Lead in the Dictionaries department of Oxford University Press (OUP). Since 2012 he has been involved in several projects at OUP, including Oxford Global Language Solutions (http://oxfordgls.com/) and Oxford Global Languages (http://www.oxforddictionaries.com/words/oxfordlanguages)...

Tuesday November 17, 2015 13:30 - 14:15 CET
Level 1, Room 114

14:25 CET

The Glue is the Hard Part: Making a Production-Ready PaaS

Docker is an amazing technology. In particular, its build-once-run-anywhere model unlocks the world of cluster schedulers like Mesos and Kubernetes. These solve many of the problems of running high-scale websites, but introduce new challenges that need addressing. 

In this talk, Evan will describe PaaSTA, a PaaS built on top of open-source tools including Docker, Mesos, Marathon, and Chronos. PaaSTA provides tooling for developers to quickly turn their microservice into a monitored, highly available application spanning multiple datacenters and cloud regions. Evan will give an overview of the open-source technologies that power PaaSTA, discuss how Yelp has glued these together to give developers control without burdening them with the complexities of the infrastructure, and show the workflow used by developers to update and maintain their services on PaaSTA.

Evan Krall

Site Reliability Engineer, Yelp
Evan envisioned a grand Docker future for Yelp in 2013, and has been working to make it a reality ever since. Evan tweets at @meatmanek.

Tuesday November 17, 2015 14:25 - 15:10 CET
Level 1, Room 114

15:55 CET

It's in the game: The path to micro-services at Electronic Arts with Docker
Learn how Docker can be used to achieve near bare-metal performance and a scale-out architecture that enables game backends to scale and stay responsive during load spikes. Game popularity can change with every feature and content pack release, and IBM and Electronic Arts have transitioned a mobile game engine to leverage Docker to enable rapid rollouts while handling more game users. In this session you'll learn design tips from the development of this next-gen gaming platform in an industry where user loyalty and performance are everything. Docker packaging of the game services is enabling a transition to a more flexible, micro-service based architecture, and this session will discuss the development lessons learned during that transition as well as the transition to using Docker in production. 

Andrew Hately

CTO Cloud Architecture, IBM
Andrew Hately is an IBM Distinguished Engineer and CTO of IBM Cloud Architecture. He’s currently working to define IBM’s Open Cloud architecture strategy using Docker, OpenStack, and CloudFoundry. Andrew also leads a team of open source developers, encouraging them to build a...

Scott Porter

Senior Developer, Electronic Arts
Scott Porter is a senior developer at Electronic Arts with a wide range of experience within the virtual worlds and games industry. He has been leading the transition to micro-services in the mobile game backend at EA Firemonkeys and is the lead architect for Docker and containers...

Tuesday November 17, 2015 15:55 - 16:40 CET
Level 1, Room 114