Delivery Camp 7.0
A total of 27 people from across the company took part. We divided into several teams that worked on predefined topics for three days. The aim was to try out innovative technologies, to cooperate with people we do not normally meet on projects, and to think innovatively, all in line with Lundegaard's values and goals.
In addition to the hard work on the projects, we also made full use of the leisure activities offered by the Vletice Farm. There was time to relax in the sauna or the jacuzzi, play games together, and enjoy a good coffee, beer, or cocktail.
“This year's premiere of DeliveryCamp, which grew out of the formerly separate DevCamp and DXCamp events, set itself a significant goal: to bring the individual essences of our Delivery by Lundegaard into one cross-team event, to explore our strengths, to meet colleagues from other perspectives and disciplines, and above all to make space for concentrating on our common idea and goal,” explains the main organizer of the event, Tomáš Plecháč.
Each topic had its own leader, who kept the team focused on its goal and made sure the topic aligned with Lundegaard's goals. Read on for detailed descriptions of selected dCamp topics.
VOICE CONTROL
As the name suggests, this team worked on voice control of devices. The aim was to test a tool that allows voice input alongside regular clicks or taps. At the same time, we wanted to design and conduct a user interview.
After creating a prototype in Adobe XD, we brought all three proposed scenarios to Amazon Echo, which in its later versions also supports simple graphical output. Thanks to this, we could prepare our prototype to display simple information.
The goal of this team was to create a narrow artificial intelligence that surpasses human capabilities: specifically, an artificial intelligence agent that can play Othello (also known as Reversi) and defeat a human. We decided to use Reinforcement Learning, a method that is becoming increasingly popular. This machine learning method is based on the agent's interaction with its environment: the agent is rewarded or penalized for each of its actions and strives to maximize the cumulative reward.
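The reward-driven loop described above can be illustrated with tabular Q-learning on a toy environment. This is a minimal, hypothetical sketch, not the team's actual Othello agent (whose board encoding and training setup the article does not show):

```python
import random

def train_q_learning(n_states=6, episodes=300, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning on a toy 'chain' environment: the agent moves left or
    right along n_states cells and receives reward +1 only on reaching the last one."""
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]; 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit current Q-values, sometimes explore
            if random.random() < epsilon:
                a = random.randrange(2)
            else:
                a = 1 if q[s][1] >= q[s][0] else 0
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # core update: nudge Q toward the reward plus discounted best future value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

random.seed(0)  # reproducible run
q = train_q_learning()
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(5)]
print(policy)  # the trained greedy policy heads right in every state: [1, 1, 1, 1, 1]
```

The same reward/penalty feedback loop scales up to Othello, where the "state" is the board position and the reward comes from winning or losing the game.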
Even though we simplified the game from the original 8x8 board to 5x5, the game still has about 870 billion potential states, which was beyond our computational capabilities. We let the model train overnight, but it was only about 70% successful against a random player. So the next step was to create an agent based on a deep neural network (DNN). This sped up the training of the model considerably and also clearly beat the random player (success rate of 90-95% on the 5x5 board and 87% on the 8x8 board). In addition to games against humans, we also tested the artificial intelligence against the traditional minimax algorithm. The AI could compete on the smaller board, but not on the larger one. One path to improving the AI would be to let it learn from these more advanced games, but there was no room for that within the dCamp, so it remains an incentive for further improvement.
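The traditional minimax algorithm the team benchmarked against can be sketched on an abstract game tree. This is a generic illustration, not the team's Othello implementation; the tiny hand-built tree below is purely hypothetical:

```python
def minimax(node, maximizing, children, value):
    """Plain minimax: the maximizer picks the max of child values, the minimizer
    the min. `children(node)` returns successor states; `value(node)` scores
    terminal positions (in Othello this would be a board evaluation)."""
    kids = children(node)
    if not kids:  # terminal position: static evaluation
        return value(node)
    scores = [minimax(k, not maximizing, children, value) for k in kids]
    return max(scores) if maximizing else min(scores)

# Tiny hand-built tree: leaves hold their own scores.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_values = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

best = minimax("root", True, lambda n: tree.get(n, []), lambda n: leaf_values[n])
print(best)  # maximizer prefers branch "a": min(3, 5) = 3 beats min(2, 9) = 2
```

Because minimax searches the game tree exhaustively (usually to a fixed depth), it becomes a much tougher opponent on larger boards, which matches the team's observation that their AI could keep up on 5x5 but not on 8x8.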
MOBILE DEVELOPMENT IN REASONML
This team decided to take its love for functional programming and mobile development a little further. So far at Lundegaard, we have based mobile applications on React Native technology and spiced it up with a functional flavor in the form of ReasonML. This technology is only one of a few options for developing mobile applications in the functional paradigm. In addition, we liked the historical link with React itself: its original version was written in OCaml (ReasonML is an alternative syntax for it).
In addition to the mobile application, a REST API was created to generate the documents needed to record business trips. Here we decided to follow the well-known path of Node.js.
The pleasant environment of the Vletice Farm and its surroundings gave us an appetite for further projects with ReasonML.
MOBILE FLUTTER DEVELOPMENT (MEETING COSTS APP)
As the second mobile team, we decided to explore an alternative to our preferred React Native. We went in the opposite direction and chose the Flutter framework powered by the Dart language, because it has resonated in developer circles in recent years and is the main "competition" for RN. Moreover, we knew from previous surveys that Flutter excels in areas where RN has its greatest deficiencies.
For the research, we devised a simple application to calculate the cost of meetings. It lets you enter meetings with a number of participants and, based on an average man-day (MD) rate, shows the cost in real time. As a very simple backend we used Firebase, for which Flutter has great support (both are Google technologies); this allowed us to implement a fast database with real-time data updates over WebSockets in minimal time.
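The core calculation could look roughly like this. It is a sketch with an assumed formula (participants times a pro-rated MD rate for the meeting's length); the article does not give the app's actual logic, and the rate values below are illustrative:

```python
def meeting_cost(participants, duration_minutes, md_rate=8000.0, hours_per_md=8):
    """Estimated cost of a meeting: each participant 'burns' their man-day (MD)
    rate, pro-rated to the meeting's length. md_rate is the price of one man-day;
    hours_per_md is the number of working hours one man-day covers."""
    hourly_rate = md_rate / hours_per_md
    return participants * hourly_rate * (duration_minutes / 60)

# A 1-hour meeting with 5 people at a rate of 8000 per MD costs 5 * 1000 = 5000.
print(meeting_cost(5, 60))  # -> 5000.0
```

In the app itself, this calculation would simply be re-run against a ticking clock to show the cost growing in real time.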
We were unable to polish the app to our full satisfaction, but we decided not to let it die and to push it to a real store release.
In conclusion, we can say that Flutter makes working with the UI very fast: in one day we managed to assemble the basic views of the application. Animations work significantly better than in RN. Thanks to IntelliSense, most of the code was created by just pressing Enter, but in general Dart is a bit confusing and we agreed we wouldn't want to maintain a big project in it. This may not matter as much for a mobile app, however.
MACHINE LEARNING - TOOL EVALUATION
The mission of this team was to analyze and test technologies that automate or simplify work with Machine Learning models. In summary, the current solutions fall into the following categories:
- Complex solutions based on Kubernetes or even paid services (currently an uninteresting segment for us, often dealing with image processing or useful mainly for big-data machine-learning problems)
- Pipelines bound to only one algorithm or ML library (vendor lock-in)
- Solutions useful for only a smaller part of our problem (model versioning, ML prediction via REST)
As part of the evaluation, we tested the open-source MLflow product. The solution currently offers broad support for deploying models from different projects (H2O, TensorFlow, PyTorch, …) and flexible logging of metrics and artifacts for each model. During testing, we ran through a sample use case and worked with metric and artifact registration. Within the FastAI project, this technology could cover the historical record of model creation and training for re-training and back-analysis. The technology itself is relatively simple and does not bring much complexity to a project. At the same time, it offers flexibility, though it requires the team to prepare for its adoption (modeling must take the use of MLflow into account).
Conclusions: MLflow does not solve the whole problem of deploying ML models, but it simply and elegantly provides their versioning and a historical view of their creation (hyper-parameter tuning).
SASSDOC - AUTOMATICALLY GENERATED DESIGN SYSTEM
In this team, coders took on the task of examining automated documentation of styles with SassDoc. The goal was simple: to launch and customize SassDoc for the needs of our projects, where it will serve as a centralized source of information for editing and maintaining UI components. Quality, systematic documentation is not only a guide for developers; it can also serve as a solid basis for a design system.
SassDoc is no novelty in the world of front-end. With a suitable codebase and architecture, SassDoc can be extended from a high-quality tool for detailed project documentation into an automatically generated (technical) design system. We were truly surprised by the simplicity and scalability of the whole SassDoc setup and by its many documentation-viewing capabilities. In conclusion, we can say that the goal was met, the beer was drunk, and Slávka won a draw with Barca... deliveryCamp did its magic.
Tomáš adds: “In the spirit of the casual atmosphere, the yummy food, the inspirational drinks, the rural environment, and thanks to the enthusiasm with which our colleagues plunged into their roles, I can say on behalf of everyone that we are looking forward to meeting again next year, in 2020.”