A seasoned Backend Engineer with 5+ years of industry experience, specializing in AI and microservices tech stacks. A friendly yet hard-working individual with a passion for Computer Science, especially interested in learning and practicing cutting-edge technologies. Scroll on to learn more about me.
Transitioned to H2O.ai with a focus on expanding expertise in AI-related technologies while
contributing as a Backend Engineer
Eliminated a critical backend bottleneck by introducing a Kafka broker into the architecture, roughly doubling document processing throughput when handling 1,000 documents in parallel
Enhanced the AI library within Document AI by implementing new features and resolving major issues for one of the product's main customers
Worked closely with customers to gather their requirements, conducted in-depth analysis, and
presented comprehensive proposals to the team for addressing their needs effectively
Recognized as a top-tier employee, ranking in the top 5% of the organization, for
outstanding contributions to projects and team leadership.
Played a pivotal role in the development and maintenance of Choreo Console and DevOps Portal, a suite of microservices deployed in Kubernetes, owning two to three critical services including the IDP and several major back-end services.
Designed and implemented a self-recovering system to facilitate the migration of live, deployed applications (approximately 10,000 applications), conducting in-depth failure analysis to ensure a seamless transition with minimal disruption.
Led a team of 3-4 developers in the design and implementation of core microservices for the
Choreo backend, from architectural planning to deployment, ensuring adherence to best
practices and scalability.
Developed an Istio Mixer adapter to enhance the mesh observability features in Cellery, a cutting-edge product. This involved building the adapter alongside an agent to receive, persist, and feed telemetry data into Siddhi, a powerful stream processor, improving Cellery's mesh observability features by 50%
Contributed to feature development related to Spring Boot, enhancing the platform’s
capabilities by integrating Spring Boot components into the existing microservices
architecture, improving modularity and ease of maintenance.
Introduced a framework for contract testing and a Docker image for running tests against live endpoints, preventing roughly 10% of bugs from reaching production.
Delivered a feature to convert AsyncAPI specifications to syntax trees and subsequently to Ballerina code segments. Nearly 30% of Ballerina users at the time were encouraged to try the new feature.
Joined as one of the initial team members (4th member, including the 3 founders) in a
startup environment, witnessing and contributing to its growth from a team of four to thirty
employees.
Mentored and guided junior team members during their internships, fostering a collaborative
and supportive learning culture.
Spearheaded the development and delivery of "Fit App," a comprehensive solution automating
scheduling, measurements, comments, photo uploads, and other functionalities within a
three-month timeframe.
Successfully deployed Proof of Concepts (PoCs) in three major hotels, including the
award-winning "iStay" mobile application and web interface, streamlining internal workflows
such as guest registration, restaurant orders, and billing.
Flutter
Java
Android
PHP
React
MongoDB
Firebase
Firestore
SQL
Freelancer
Jan 2016 - April 2019 | Software Engineer
Leveraged freelance platforms to secure and manage projects, specializing in the
development of mobile applications and web interfaces for diverse clients.
Successfully delivered customized solutions tailored to the specific needs and
requirements of each client.
Gained invaluable insights into the software industry, honing technical skills and
cultivating a strong motivation to pursue a career in software engineering.
This experience laid the foundation for my subsequent roles and provided a solid
understanding of industry dynamics and best practices.
Complete document processing product - users can run OCR on documents, annotate and label datasets, train models, publish those models as pipelines, and score documents
This is a large, fairly complex project. The major back-end services are implemented in Golang, and there are separate UIs for model training and document scoring.
One service talks to the Argo server through the Argo Golang client; it is responsible for executing, managing, and reporting Argo workflows and their statuses. Another service talks to a Kafka broker and deploys pipelines (Helm charts) through the Helm Golang client, as sketched below. Yet another service is responsible for communicating with the Kubernetes API server to manage deployments and services. I have designed and implemented features in all of these services.
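To give a flavour of the Helm-based pipeline deployment, here is a minimal sketch using the Helm v3 Go SDK. It is not the actual service code; the chart path, release name, namespace, and values are placeholders I made up for illustration.

package main

import (
	"log"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/chart/loader"
	"helm.sh/helm/v3/pkg/cli"
)

func main() {
	settings := cli.New()

	// Initialise the Helm action configuration against the target namespace.
	// "secret" selects Helm's default (Secrets-based) storage driver.
	actionConfig := new(action.Configuration)
	if err := actionConfig.Init(settings.RESTClientGetter(), "pipelines", "secret", log.Printf); err != nil {
		log.Fatalf("init helm config: %v", err)
	}

	// Load the pipeline chart from disk (placeholder path).
	chart, err := loader.Load("./charts/scoring-pipeline")
	if err != nil {
		log.Fatalf("load chart: %v", err)
	}

	// Install the chart as a release, passing values for this pipeline.
	install := action.NewInstall(actionConfig)
	install.ReleaseName = "scoring-pipeline-demo"
	install.Namespace = "pipelines"

	rel, err := install.Run(chart, map[string]interface{}{
		"replicaCount": 2,
	})
	if err != nil {
		log.Fatalf("install release: %v", err)
	}
	log.Printf("deployed release %s (chart %s)", rel.Name, rel.Chart.Metadata.Name)
}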
When I joined, there was a major performance bottleneck in the backend caused by a proxy service we used. That service was stateful, persisting data in its memory, so it could not be scaled. We solved that by using a Kafka broker to hold the state, making the service stateless so it could scale out. We benchmarked the progress in 'seconds per page' (this is a document processing product); the change roughly doubled document processing throughput when processing 1,000 documents in parallel.
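As a rough illustration of the stateless-worker pattern we moved to (not the actual service code), this sketch consumes document-processing jobs from a Kafka topic using the segmentio/kafka-go client; the broker address, topic, and group names are placeholders.

package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Each replica joins the same consumer group, so Kafka balances the
	// partitions across replicas and the workers stay stateless.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"kafka:9092"},     // placeholder broker address
		GroupID: "document-processors",      // placeholder consumer group
		Topic:   "document-processing-jobs", // placeholder topic
	})
	defer r.Close()

	for {
		msg, err := r.ReadMessage(context.Background())
		if err != nil {
			log.Printf("read message: %v", err)
			return
		}
		// The message carries everything the worker needs; no in-memory state
		// survives between messages, so replicas can scale out freely.
		log.Printf("processing document %s (%d bytes)", string(msg.Key), len(msg.Value))
		// processDocument(msg.Value) // hypothetical processing step
	}
}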
I developed features across all of the deployed microservices, including the front-end written in React, the AI library written in Python, and the back-end services written in Golang.
I have worked closely with Argo Workflows; we use the Argo Golang client to communicate with the Argo server. At one point I implemented an additional sidecar, attached to the workflows, to work around a bug that appears when Argo Workflows is used with the Linkerd service mesh -
https://github.com/HasithaAthukorala/argo-sidecar
I worked closely with the customers of this product. They run a live environment and come to us sometimes with feature requests and sometimes with bugs, so I had to gather their requirements, conduct in-depth analysis, and present comprehensive proposals to our team for addressing their needs effectively.
There were two main components in the system - the Dataplane and the Controlplane.
Paid users could connect their own Kubernetes clusters as private dataplanes to keep their data safe.
The front-end UI is implemented with React; I worked on it to implement the UI for some of the features I designed.
I owned, designed and implemented some of the major services of the product.
Rudder - a back-end service that interacts with the Kubernetes API server to manage all the deployments, services, ingresses, config maps, secrets, etc. created by the users/customers of the product
IDP - the identity server of Choreo. We deployed WSO2 Identity Server along with some custom extensions I implemented in Java. I added the Anonymous Login implementation there and owned and maintained the service.
I did a huge migration in the production environment. When a user registers with the system, they create an organization, and under that organization users can create projects. Before the migration, an organization was mapped to a namespace in Kubernetes, but a change in the product design required mapping a namespace to a project instead. There were around 10,000 deployments in our live production environment. During the migration I had to create new namespaces and move all the config maps, secrets, deployments, services, ingresses, etc. from the organization-based namespaces to the new project-based namespaces. I implemented a new service to do that asynchronously through the Golang Kubernetes client. I did an in-depth failure analysis to identify all the possible failure points and, from those, defined a set of states an application could fall into during the migration, with fallback plans to execute whenever an application ended up in an error state. Ultimately, I migrated around 10,000 deployments successfully.
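As a simplified sketch of the migration idea (placeholder names, not the production code), the snippet below copies a ConfigMap from an organization namespace into the new project namespace with the Kubernetes Go client and records a per-application state that a fallback routine could act on.

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Migration states derived from the failure analysis (illustrative only).
type migrationState string

const (
	statePending migrationState = "Pending"
	stateCopied  migrationState = "Copied"
	stateFailed  migrationState = "Failed"
)

func migrateConfigMap(ctx context.Context, cs *kubernetes.Clientset, oldNS, newNS, name string) migrationState {
	// Read the ConfigMap from the organization-based namespace.
	cm, err := cs.CoreV1().ConfigMaps(oldNS).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		log.Printf("get %s/%s: %v", oldNS, name, err)
		return stateFailed
	}

	// Re-create it in the project-based namespace, dropping server-managed fields.
	clone := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: cm.Name, Labels: cm.Labels},
		Data:       cm.Data,
	}
	if _, err := cs.CoreV1().ConfigMaps(newNS).Create(ctx, clone, metav1.CreateOptions{}); err != nil {
		log.Printf("create %s/%s: %v", newNS, name, err)
		return stateFailed // a fallback plan would pick this state up
	}
	return stateCopied
}

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	state := migrateConfigMap(context.Background(), cs, "org-acme", "project-shop", "app-config")
	log.Printf("migration state: %s", state)
}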
Integrated WebSub to listen to events coming from some of the event APIs we used, such as GitHub, Slack, and Asgardeo.
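For context, a WebSub subscriber exposes a callback URL: the hub first verifies the subscription by sending a hub.challenge that must be echoed back, and then delivers event notifications as POSTs. Below is a minimal sketch of such a callback in Go; the path and port are placeholders, not the actual integration.

package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/websub/callback", func(w http.ResponseWriter, r *http.Request) {
		switch r.Method {
		case http.MethodGet:
			// Subscription verification: echo the hub.challenge back to the hub.
			challenge := r.URL.Query().Get("hub.challenge")
			if challenge == "" {
				http.Error(w, "missing hub.challenge", http.StatusBadRequest)
				return
			}
			io.WriteString(w, challenge)
		case http.MethodPost:
			// Content delivery: the hub POSTs the event payload (e.g. a GitHub or Slack event).
			body, err := io.ReadAll(r.Body)
			if err != nil {
				http.Error(w, "read error", http.StatusInternalServerError)
				return
			}
			log.Printf("received event: %d bytes", len(body))
			w.WriteHeader(http.StatusAccepted)
		default:
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		}
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}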
Code-first approach to building, integrating, running and managing composite microservice
applications on Kubernetes along with the Cell architecture
I contributed to this project while doing an internship at WSO2
We had implemented some custom resource definitions to introduce a new kind to Kubernetes - Cell
I mainly worked in the Cellery Observability area. We used Istio as the service mesh for the product; Istio is installed to the cluster automatically along with the product installation. We had our own Grafana-like user interface to let users see the Cells in the cluster. We fetched telemetry data from Istio and fed it to a stream processor called Siddhi to process and store the data used by that interface. I implemented a custom adapter for Istio to fetch and transfer data to Siddhi, and a service acting as an agent to fetch data from Istio and feed it to the data processing service (Siddhi) after some modifications, such as attaching metadata to the dataset. I then implemented a custom post-processor on top of Siddhi to process the telemetry data. Part of this was a deduplication step to filter out duplicate traces: at that time, when a request was sent from service A to service B, we received tracing data from both service A and service B through the Istio adapter, which made a single call look like two separate requests. Hence I added a deduplication post-processor to Siddhi to merge them and treat them as one request.
https://github.com/wso2/cellery https://github.com/wso2/cellery-observability
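The deduplication idea, sketched here in Go rather than the actual Siddhi post-processor: client-side and server-side spans reported for the same call share a trace ID and span ID, so they can be keyed on that pair and merged into a single record. The field names and merge rules below are illustrative assumptions, not the real schema.

package main

import "fmt"

// span is a simplified telemetry record as it might arrive from the mesh;
// the real data carried many more Istio attributes.
type span struct {
	TraceID  string
	SpanID   string
	Source   string // reporting side, e.g. "client" or "server"
	Service  string
	Duration int64 // microseconds
}

// dedupe merges the client- and server-reported spans of the same call into
// one record, keyed by (TraceID, SpanID).
func dedupe(spans []span) []span {
	merged := map[string]span{}
	for _, s := range spans {
		key := s.TraceID + "/" + s.SpanID
		if existing, ok := merged[key]; ok {
			// Keep the longer duration and prefer the server-side service name.
			if s.Duration > existing.Duration {
				existing.Duration = s.Duration
			}
			if s.Source == "server" {
				existing.Service = s.Service
			}
			merged[key] = existing
			continue
		}
		merged[key] = s
	}
	out := make([]span, 0, len(merged))
	for _, s := range merged {
		out = append(out, s)
	}
	return out
}

func main() {
	spans := []span{
		{TraceID: "t1", SpanID: "s1", Source: "client", Service: "service-a", Duration: 1200},
		{TraceID: "t1", SpanID: "s1", Source: "server", Service: "service-b", Duration: 1100},
	}
	fmt.Printf("%d span(s) after deduplication\n", len(dedupe(spans))) // prints: 1 span(s)
}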
I designed and implemented a tool to generate Ballerina code templates from a given AsyncAPI spec: it analyzes the spec, generates the syntax tree, and then generates the code from that syntax tree. We used some popular event APIs as the example AsyncAPI specification set, such as Slack, GitHub, Google Sheets/Drive/Calendar, HubSpot, Twilio, and Shopify. I used clean architecture to design this repository -
https://github.com/ballerina-platform/asyncapi-tools https://github.com/ballerina-platform/asyncapi-triggers
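The actual tool lives in the repositories above. Purely as a toy illustration of the spec -> intermediate model -> code-template pipeline, here is a Go sketch that reads a minimal AsyncAPI-style JSON document and emits stub handler functions per channel; the sample spec and the emitted template are simplified stand-ins, not real generated Ballerina.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"strings"
)

// asyncAPIDoc models just enough of an AsyncAPI spec for this toy example.
type asyncAPIDoc struct {
	Info struct {
		Title string `json:"title"`
	} `json:"info"`
	Channels map[string]struct {
		Description string `json:"description"`
	} `json:"channels"`
}

const sampleSpec = `{
  "info": {"title": "Sample Events API"},
  "channels": {
    "app/installed": {"description": "Fired when the app is installed"},
    "app/uninstalled": {"description": "Fired when the app is removed"}
  }
}`

func main() {
	// Step 1: parse the spec into an intermediate model.
	var doc asyncAPIDoc
	if err := json.Unmarshal([]byte(sampleSpec), &doc); err != nil {
		log.Fatal(err)
	}

	// Step 2: walk the model and emit a stub handler per channel.
	var b strings.Builder
	fmt.Fprintf(&b, "// Generated template for %s\n", doc.Info.Title)
	for name, ch := range doc.Channels {
		fnName := strings.ReplaceAll(name, "/", "_")
		fmt.Fprintf(&b, "// %s\nremote function on_%s(json event) {\n    // TODO: handle event\n}\n\n", ch.Description, fnName)
	}
	fmt.Print(b.String())
}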
Developed features for the Ballerina low-code editor that support converting flow charts into Ballerina code segments.
I joined as one of the initial team members (the 4th member, including the 3 founders) in a startup environment. By the time I resigned, there were about 30 employees. I was able to see and contribute to the company's growth, and watching it grow through the effort we put in was hugely satisfying.
Maia - a patient management system for psychiatrists in New Zealand. A comprehensive system that helps psychiatrists stay closer to their patients and help them, keep track of prescriptions, and gather and use medical data in compliance with the regulations.
iLogCancer - a patient management system for oncologists in Norway. A comprehensive system that helps oncologists stay closer to their patients and help them, keep track of prescriptions, and gather and use medical data.
We won an award for a mobile application we implemented for the hotel industry and deployed PoCs of it to selected hotels.
We built a product (including a mobile application and a web interface) for a specific customer in the apparel industry to automate taking measurements, scheduling fit sessions, uploading photos, and more.
Flutter
Java
Android
PHP
React
MongoDB
Firebase
Firestore
SQL
Other
Contract tests runner - When there are many microservices, the front-end is sometimes unaware of small changes to the back-end endpoints. Hence, the front-end gives the back-end a contract, much like an OpenAPI specification. This runner checks the validity of the given contract upon each new release of the back-end services and identifies the places where the contract is broken by a back-end endpoint.
https://github.com/wso2/contract-test-runner
Java
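The actual runner linked above is written in Java; purely to illustrate the kind of check it performs, here is a small Go sketch that calls a live endpoint and verifies the response against fields declared in a contract. The URL, path, fields, and status code are placeholders, not values from the real project.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// contract describes what the front-end expects from an endpoint
// (a heavily simplified stand-in for an OpenAPI-style specification).
type contract struct {
	Path           string
	ExpectedStatus int
	RequiredFields []string
}

func check(baseURL string, c contract) error {
	resp, err := http.Get(baseURL + c.Path)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != c.ExpectedStatus {
		return fmt.Errorf("%s: expected status %d, got %d", c.Path, c.ExpectedStatus, resp.StatusCode)
	}

	var body map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return fmt.Errorf("%s: invalid JSON: %w", c.Path, err)
	}
	for _, field := range c.RequiredFields {
		if _, ok := body[field]; !ok {
			return fmt.Errorf("%s: contract broken, missing field %q", c.Path, field)
		}
	}
	return nil
}

func main() {
	c := contract{Path: "/api/v1/users/1", ExpectedStatus: 200, RequiredFields: []string{"id", "name"}}
	if err := check("http://localhost:8080", c); err != nil { // placeholder endpoint
		log.Fatalf("contract check failed: %v", err)
	}
	log.Println("contract check passed")
}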
Feel free to contact me. Thank you.
hello@athukorala.me / hasithaisuru@gmail.com