OUR THIRTEENTH MEETUP
Thursday, May 9th, 6pm at Tramshed Tech, Cardiff.
For this epic meetup we joined forces with AI Wales and the AWS South Wales User Group for an action-packed evening.
We have to say a massive thanks to DevOpsGroup for supporting us and helping us grow the Cloud Native community in Wales for over a year! They have provided us with a stunning venue for our meetups, and the food and drinks they lay on always go down well! Head over to their careers page to check out their exciting vacancies.
The organisers of AI Wales and AWS South Wales User Group helped a lot in putting this evening together. So a special thanks to Jaymie Thomas, Matt Lewis, Toby White and David Pugh.
Thanks also to DevOpsGroup, Yolk Recruitment, Tramshed Tech, Artimus and Mobilise.Cloud for sponsoring the evening.
Free Digital Apprenticeships in Cardiff
Louise Harris, founder and director of Tramshed Tech and Big Learning Company spoke about new digital apprenticeships that are available for free. They are:
- Digital Application Support Level 3
- Digital Learning Design Level 3
- Social Media & Digital Marketing Level 3
- Information Security Level 3
- Data Analytics Level 4
Keep an eye on the Big Learning Company for announcements coming soon!
Scale Machine Learning from zero to millions of users
Julien Simon is a Global Technical Evangelist for AI & Machine Learning at AWS. Julien spoke about how to train machine learning models and how to take them from development to production: an extremely labour-intensive task that seems to be a pain point for many teams running ML models in production.
Advice no. 1: avoid ML if you can! See if your needs can be satisfied by calling an existing API from a cloud provider that offers pre-trained models. Save yourself a whole lot of pain!
If not, enter ML
Working on your own, you would train the model on your machine and, once you have tested it, deploy it to a Virtual Machine in the cloud. Good points: simple setup. Not so good points: it does not scale, involves manual work and gives you a monolithic architecture. Julien then showed a demo of running an EC2 instance (an AWS Virtual Machine) with the AWS Deep Learning AMI.
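The single-VM pattern can be sketched as a tiny prediction service. This is a minimal sketch, not anything shown in the talk: the "model" here is a stand-in hard-coded linear function, and the handler shape is an assumption for illustration.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for a trained model: a hard-coded linear function.
# In the scenario from the talk, this would be a model you trained
# and tested on your own machine before copying it to the VM.
def predict(features):
    weights = [0.5, -1.2, 3.0]
    return sum(w * x for w, x in zip(weights, features))

class PredictHandler(BaseHTTPRequestHandler):
    """Answers POST requests whose body looks like {"features": [1.0, 2.0, 3.0]}."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        payload = json.dumps({"prediction": predict(body["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

def serve(port=8080):
    # On the single-VM setup, the model and the web server live in one process:
    # simple to run, but scaling, updates and availability are all manual.
    HTTPServer(("0.0.0.0", port), PredictHandler).serve_forever()
```

Calling `serve()` on the VM exposes the endpoint, and everything beyond that (scaling out, redeploying, staying up) remains your problem, which is exactly the drawback of this option.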
More customers, more team members, more models
Scalability, high availability & security are now a must. You opt to scale out and implement all the good DevOps practices: Infrastructure as Code, Continuous Integration, Continuous Deployment and so on. Julien presented the following three options:
1: More Virtual Machines
The first option you have is more Virtual Machines. Good points: it might give you the scaling you need. Not so good points: the infrastructure setup requires a lot of effort.
2: Docker Clusters
You can containerise your models using Docker and run them on either Amazon Elastic Container Service (ECS) or Amazon Elastic Container Service for Kubernetes (Amazon EKS). This gives you options for scaling; however, it is still not fully managed, so you will have to maintain the clusters yourself. We then got to see a demo of an ECS cluster running TensorFlow training and prediction.
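Containerising a model for ECS or EKS can be as simple as wrapping the code in a Dockerfile. This is an illustrative sketch, not the one from the demo: the base image tag and the file names (`requirements.txt`, `serve.py`) are placeholder assumptions.

```
# Base image with TensorFlow preinstalled (tag is illustrative)
FROM tensorflow/tensorflow:latest

WORKDIR /app

# Model code and dependencies (file names are placeholders)
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY serve.py .

# Serve predictions over HTTP on port 8080
EXPOSE 8080
CMD ["python", "serve.py"]
```

The resulting image can then be pushed to a registry and scheduled on the cluster, which is what gives you the scaling options, at the cost of maintaining the cluster.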
3: Amazon SageMaker
SageMaker is AWS’ fully managed machine learning service. It allows data scientists to build and train ML models and deploy them to production-ready, fully managed servers at any scale. We got to see SageMaker in action. This solution requires no infrastructure management and minimal setup effort for ML.
Julien concluded by saying that you should implement a solution that works for your current requirements and should not over-engineer it for the future.
If you would like to learn more, head over to:
Jack Kelly is a graduate engineer at DevOpsGroup. Jack gave a lightning talk on Rust, speaking about its increasing popularity in the tech community and highlighting a number of projects on the CNCF Landscape that use Rust.
Jack highlighted some of the language's features, such as performance and reliability, and how it compares to other languages.
Jack also announced the newly formed Rust & C++ Cardiff meetup! Head over to their meetup page, join and show them some love!
Head over to Jack’s GitLab & GitHub repos to see the projects that he is working on.
What is happening when you see two men, both named James, crossing their arms over their chests to make an ‘X’ sign?
That’s right, it’s Jenkins X time!
We were lucky to have James Strachan and James Rawlings, co-creators of Jenkins X. Jenkins X is a next-generation, Kubernetes-native Continuous Integration/Continuous Deployment platform. It should be noted that James Strachan is also the creator of Apache Camel and Apache Groovy; the latter is often dubbed the grooviest language of them all!
James R spoke about the history of Jenkins: there are around 200,000 Jenkins servers running, with some 15,000,000 Jenkins users. It is a great tool, but it has some limitations: it can be a single point of failure, the server is memory-intensive and always running, and scaling jobs can lead to issues.
The Jameses mentioned that if you have worked with Kubernetes, you will have realised that Kubernetes deployments are not straightforward. Jenkins X was created to abstract away Kubernetes, and even the deployment pipelines, so that as a developer you can focus on delivering business functionality.
James S talked about how, according to the 2018 State of DevOps Report, high-performing teams make full use of CI/CD to deliver to market faster and spend less time remediating issues.
That is where Jenkins X steps in: it automates your CI/CD so you can focus on building applications. With Jenkins X, all your artifacts are in version control and you can use trunk-based development.
James S & James R showed Jenkins X in action. Jenkins X can be installed on an existing or new Kubernetes cluster.
To show how easy it is to create an application and deploy it to a Kubernetes cluster, James R ran the following command:
$ jx create quickstart
With a single command, James was able to:
- create a simple Node.js application
- configure a GitHub repository for it
- generate a Dockerfile for the app
- generate a Jenkinsfile to implement CI/CD
- generate Helm charts for Kubernetes deployment
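The generated Jenkinsfile follows Jenkins' declarative pipeline style. Here is a stripped-down sketch, not the exact file jx generates: the stage names, branch patterns and shell commands are illustrative.

```
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        // Build the container image for the app (command is illustrative)
        sh 'docker build -t myapp:$BUILD_NUMBER .'
      }
    }
    stage('Deploy Preview') {
      when { branch 'PR-*' }
      steps {
        // Pull requests get their own preview environment
        sh 'jx preview'
      }
    }
    stage('Promote') {
      when { branch 'master' }
      steps {
        // Merges to master are promoted through the environments
        sh 'jx promote --all-auto'
      }
    }
  }
}
```

The key idea is that pull requests and mainline merges take different paths: previews for review, automatic promotion for merged changes.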
As soon as James pushed the code to the GitHub repository, it was deployed to the Kubernetes cluster and a preview link was provided. You can see the repository here.
James then made a change locally and created a pull request, which kicked off a deployment and provided a preview link. The idea is that the team can review the change and, if happy, merge it to a branch of their choice, which then automatically kicks off a deployment to their production Kubernetes cluster. We got to see how Jenkins X implements a GitOps approach. Some seriously good stuff here!
Jenkins X is an open-source project, and both Jameses encouraged everyone to join the community, learn and contribute to the project.
If you would like more information head over to the following links:
And it seemed like (pretty much) everyone caught the Jenkins X fever:
We were able to give away the following prizes:
We would like to thank JetBrains for their continued support.
We would also like to thank Skills Matter for providing us with tickets to their awesome conferences for every one of our meetups. Special thanks to Carla, Sam & Nicole from Skills Matter for their support!
Feedback / Content
If you would like to:
- Give a talk
- Get more information regarding the Meetup
- Talk about sponsorship
- Any other suggestions or support
Please drop us a message on Twitter @CloudNativeWal or email us at firstname.lastname@example.org