The challenge:

As a division of the Deutsche Bahn Group (DB), DB Regio Bus is subject to the compliance guidelines of the parent company. These requirements set out clear rules for managing IT infrastructure and data. A complete and up-to-date overview of a company’s IT infrastructure is particularly important for a company with several complex business areas. A key component of the compliance guidelines is the creation of an infrastructure register (asset register), which clearly shows which resources DB Regio Bus manages in the AWS cloud.

Since DB Regio Bus ITK maintains its own asset register (source system), it is necessary to synchronize this register with the central register of Deutsche Bahn (destination system).

Meeting this requirement calls for the development of a new process and a new application.

The following challenges arise for the implementation:

  • Separation of staging and productive operation
  • Implementation of the batch workload as a fully managed service to keep the burden on operations teams minimal (automated, daily execution of the workload)
  • Ensuring that the application can be developed further with little manual effort from developers
  • Continuous further development and delivery of the application using CI/CD
  • No permanently running infrastructure (used on demand): the application runs only as long as a task has to be completed

The solution:

Separation into test and production environments:

To allow continued development of the application while keeping production stable, the application was split into two identical but separate environments. While the stable version of the application runs in the production environment, the test environment is used to develop and optimize the application further.
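The article does not say how the two environments are provisioned. As one possible sketch, assuming AWS CDK in TypeScript, the same stack definition could simply be instantiated twice (BatchSyncStack and its stage parameter are hypothetical, not taken from the article):

```typescript
import * as cdk from 'aws-cdk-lib';
import { BatchSyncStack } from './batch-sync-stack'; // hypothetical stack definition

const app = new cdk.App();

// Test environment: used to develop and optimize the application further.
new BatchSyncStack(app, 'BatchSync-Test', { stage: 'test' });

// Production environment: runs the stable version of the application.
new BatchSyncStack(app, 'BatchSync-Prod', { stage: 'prod' });
```

Keeping both environments in one code base ensures they stay structurally identical while remaining fully separate at deployment time.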

Automated Docker image builds to ECR:

The implemented CI/CD pipeline, based on AWS CodePipeline, CodeCommit, CodeBuild, and CodeDeploy, enables fully automated deployments to both environments. Unit and integration tests of the application artifact also run automatically within the pipeline. After the tests pass, a Docker image is automatically built and pushed to Amazon ECR.
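As an illustration of the build stage, the following sketch uses AWS CDK in TypeScript to define a CodeBuild project that runs the tests and, on success, builds and pushes the image. The repository handle, construct names, and test command are assumptions, not details from the article:

```typescript
import * as codebuild from 'aws-cdk-lib/aws-codebuild';
import * as ecr from 'aws-cdk-lib/aws-ecr';
import { Construct } from 'constructs';

function buildProject(scope: Construct, repo: ecr.IRepository): codebuild.PipelineProject {
  const project = new codebuild.PipelineProject(scope, 'Build', {
    environment: {
      buildImage: codebuild.LinuxBuildImage.STANDARD_7_0,
      privileged: true, // required to run docker build inside CodeBuild
    },
    environmentVariables: {
      REPO_URI: { value: repo.repositoryUri },
    },
    buildSpec: codebuild.BuildSpec.fromObject({
      version: '0.2',
      phases: {
        pre_build: {
          commands: [
            'npm ci && npm test', // unit/integration tests (assumed command)
            // log in to the ECR registry (host part of the repository URI)
            'aws ecr get-login-password | docker login --username AWS --password-stdin ${REPO_URI%/*}',
          ],
        },
        build: {
          commands: [
            'docker build -t $REPO_URI:latest .',
            'docker push $REPO_URI:latest',
          ],
        },
      },
    }),
  });
  repo.grantPullPush(project); // allow the build role to push the image
  return project;
}
```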

Automatic updates of the ECS service:

As soon as a new image has been created, the new version is rolled out fully automatically to Amazon Elastic Container Service (ECS) using CodeDeploy. ECS then ensures that the latest version of the application is used automatically on the next trigger (see below).
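The article does not detail how this rollout is wired up. One common pattern for scheduled tasks, sketched here with AWS CDK in TypeScript, is to point the task definition at a stable image tag so each new run pulls the most recently pushed image; construct names and sizing are illustrative assumptions:

```typescript
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecr from 'aws-cdk-lib/aws-ecr';
import { Construct } from 'constructs';

function syncTaskDefinition(scope: Construct, repo: ecr.IRepository): ecs.FargateTaskDefinition {
  const taskDef = new ecs.FargateTaskDefinition(scope, 'SyncTask', {
    cpu: 256,
    memoryLimitMiB: 512,
  });
  taskDef.addContainer('app', {
    // 'latest' tag: every trigger pulls the newest image pushed by the pipeline
    image: ecs.ContainerImage.fromEcrRepository(repo, 'latest'),
    logging: ecs.LogDrivers.awsLogs({ streamPrefix: 'asset-sync' }),
  });
  return taskDef;
}
```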

Moving to a CloudWatch-triggered batch workload:

The workload can be implemented as a so-called batch workload: data is read from the source database, transformed within the application according to the requirements, and transferred to the target system. Once all records have been processed and transferred, there is no reason to keep the application running.
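The article does not state the application's implementation language. A minimal sketch of such a batch entrypoint, written here in TypeScript with hypothetical sourceDb, targetRegister, and transform helpers, shows the pattern:

```typescript
import { sourceDb, targetRegister, transform } from './sync'; // hypothetical module

async function main(): Promise<void> {
  const records = await sourceDb.fetchAllAssets();   // read from the source register
  for (const record of records) {
    await targetRegister.upsert(transform(record));  // convert and transfer
  }
  // Nothing left to process: the process exits, so the ECS task stops
  // and incurs no further cost until the next trigger.
}

main().catch((err) => {
  console.error(err);
  process.exit(1); // non-zero exit marks the ECS task as failed
});
```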

For this reason, Amazon ECS was chosen, as the service is well suited to batch workloads. Based on an Amazon CloudWatch Events trigger, the application is launched every day at a predefined time. Once all data records have been processed, the application stops automatically and is only started again by the next trigger.
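A minimal sketch of the daily trigger, again assuming AWS CDK in TypeScript (the cron time and construct names are placeholder assumptions):

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import { Construct } from 'constructs';

function scheduleDailyRun(
  scope: Construct,
  cluster: ecs.ICluster,
  taskDefinition: ecs.FargateTaskDefinition,
): void {
  // CloudWatch Events (EventBridge) rule firing once per day at 03:00 UTC
  const rule = new events.Rule(scope, 'DailyTrigger', {
    schedule: events.Schedule.cron({ minute: '0', hour: '3' }),
  });
  // Start the batch task in the cluster on each invocation of the rule
  rule.addTarget(new targets.EcsTask({
    cluster,
    taskDefinition,
    subnetSelection: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
  }));
}
```

Between runs no task is running, so no compute is billed; the next rule invocation starts a fresh task from the latest image.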

This reduces the cost of the implemented solution by a factor of about 10 compared to an architecture based on AWS EC2.

Source: https://www.protos-technologie.de/2022/08/15/optimierung-von-eventgesteuerten-workloads-mit-containern-und-aws-serverless/
