Our Solutions Archive | IT Svit

Using Hyperledger to secure medical records

Secure storage of medical records is a nightmare for everyone involved. Paper records get lost, and digital records get hacked (like the 100 million records compromised in 2015). This is why ensuring the security of medical records is a challenge of the utmost importance.

The problem

We needed to deliver a highly reliable solution for storing medical records that would provide granular access control, rapid, low-cost transactions and immutability of records. Blockchain technology lent a hand, and Hyperledger was uniquely suited for the task.

The solution

Hyperledger is a coin-free blockchain ledger that provides role-based access control (RBAC) out of the box and can perform up to 100,000 transactions per second. This makes it well suited for secure storage of, and granular access to, sensitive data such as medical records.


Using Hyperledger ensures that the only people with access to the data are the patients, their doctors and other appropriately authorized medical personnel, and only doctors can add records. For example, if a patient suffers a car accident far from home and cannot be treated by their family doctor, the authorized medical personnel of the local hospital can still access the medical records needed to treat the injuries.

In addition, with our blockchain solution doctors record diseases and vaccinations using codes from the international classification, and the ledger stores only the hashes of these codes. When this information is needed, the hashes are read and the full details are retrieved from external oracles through an API.
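
As a minimal sketch of this flow (the oracle endpoint below is hypothetical, for illustration only), only the hash of a classification code ever reaches the ledger:

# compute the hash of an ICD code; only this hash is written to the ledger
CODE="J06.9"
HASH=$(echo -n "$CODE" | sha256sum | awk '{print $1}')
# later, resolve the hash back to the full description via an external oracle API
curl -s "https://oracle.example.com/icd/lookup?hash=$HASH"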

The result

Using Hyperledger ensures secure storage of medical records and free transactions with strict RBAC functionality that cannot be meddled with. This makes it a perfect choice for government authorities, and it can be quite beneficial for building an efficient e-government structure for healthcare.

Blockchain-based excise stamp replacement system

The task

Using QR codes as a replacement for excise stamps would allow decentralized and transparent control over the logistics, delivery and sale of consumer goods in supermarkets and shops. This would eliminate the possibility of fraud and the sale of counterfeit wares, as well as increase tax revenues.

The solution

We have designed and developed a blockchain-based system that uses QR codes instead of excise stamps. Any manufacturer of excise goods can order the required number of QR codes from the governmental authority. Each unit of goods is then marked with a QR code, a pack of cigarettes in our example (a hashing sketch follows the list):

  • The QR code on a pack holds information about the manufacturer, the date of production, the batch number, etc.
  • The QR code on a block contains the hashes of all the packs within
  • The QR code on a box contains the hashes of all the blocks within
  • The QR code on a pallet contains the hashes of all the boxes within
  • The QR code on a batch contains the hashes of all the pallets within
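
This nesting is essentially hash chaining: each level's QR payload commits to the hashes of the level below it. A minimal shell sketch, assuming each pack's QR payload is stored in a pack_*.json file (the file names are placeholders):

# hash every pack's QR payload, then derive the block-level payload from those hashes
for pack in pack_*.json; do sha256sum "$pack"; done | awk '{print $1}' > pack_hashes.txt
BLOCK_HASH=$(sha256sum pack_hashes.txt | awk '{print $1}')
echo "Block QR payload: $BLOCK_HASH"

The same step repeats for boxes, pallets and batches, so a single scan at any level verifies everything beneath it.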


When the cigarette manufacturer is going to produce a new batch of cigarettes, they order the required number of QR codes from the governing body and mark the goods appropriately. When the batch is delivered to a wholesale distributor, the batch QR code is scanned and the system is notified of the place and time of the event, so this data is stored in the blockchain. The same goes for small-scale distributors and all the way down to supermarket storage. As a result, when customers purchase a pack of cigarettes in a shop, they can see that it is a genuine commodity, as they know the manufacturer, the production date and the batch number.

Once the pack is sold, its token is put into the “sold” state. If the commodity must later be returned to the retailer (for warranty service of a household appliance, for example), this token can be used to verify that the item is genuine before it is returned to the manufacturer. Consequently, if fraudulent vendors decide to forge the QR codes, scanning a counterfeit unit will reveal that its QR code has already been used and that the genuine commodity is stored or was sold elsewhere.

The system uses the blockchain API to access external data storage and keeps all the textual details in the cloud, while only the transaction hashes are stored in the chains. This ensures transactions are processed in milliseconds and require minimal disk space.
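
A rough sketch of how a single scan event could be recorded under this split; the S3 bucket and the chain API endpoint below are hypothetical:

# the full event details go to cloud storage, only their hash goes into the chain
EVENT='{"qr":"PCK-001","place":"warehouse-7","time":"2018-05-04T10:00:00Z"}'
HASH=$(echo -n "$EVENT" | sha256sum | awk '{print $1}')
echo "$EVENT" | aws s3 cp - "s3://excise-events/$HASH.json"
curl -s -X POST "https://chain.example.com/tx" -d "{\"hash\":\"$HASH\"}"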

The result

Implementing this blockchain-based system would help defeat fraud and the sale of counterfeit goods, and would let commodity manufacturers control the logistics and ensure genuine products are delivered to end users. It would also increase tax revenues and lower the expense of excise stamp production, as QR codes are much cheaper to print.

Blockchain-based coupon and customer loyalty system

The task

One of IT Svit's customers came up with the idea of a unified coupon and customer loyalty system working with online shops built on Shopify. This would unite multiple shops into an integral network, providing tangible benefits for both sellers and customers.

The solution

We have built a web portal uniting Shopify-based retailers and their customers through the Ethereum and Shopify APIs. A retailer orders a certain number of coupons and can set their own rules for issuing them. They can also see where the coupons they issued were used, which provides detailed analytics of customer behavior.

Customers can apply their coupons anywhere across the network of shops, send them to another customer, and review the date and terms under which each coupon was issued to them.
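
A purely illustrative sketch of the portal flow; the endpoints and payloads below are hypothetical, not the actual API:

# a retailer orders 100 coupons with its own issuing rule
curl -X POST "https://portal.example.com/api/coupons" \
  -d '{"retailer":"shop-1","amount":100,"rule":"order_total_over_50"}'
# a customer transfers coupon 42 to another customer
curl -X POST "https://portal.example.com/api/coupons/42/transfer" \
  -d '{"to":"customer-2"}'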

The results

Our platform helps retailers optimize their marketing expenses and provides behavioral analysis of customer purchases. This enables upselling and cross-selling to boost revenues, not to mention the sales growth driven by discounts and loyalty coupons. Customers benefit by saving money and being able to gift coupons to their friends.

We plan to expand the platform to work with the Magento and WooCommerce APIs and unite multiple online shops into a mutually beneficial blockchain-based network.

Optimization of the documentation workflow for the SPA

When a ship enters a port, a long and tedious process of issuing the required permissions begins. It can sometimes take half a year, because the documents must be signed by 18 approving authorities. This involves multiple postal deliveries of documents, paying wages to innumerable personnel, and wide opportunities for fraud. We wanted to improve the procedure by adding blockchain into the mix.

The task

The Sea Ports Authority (SPA) ordered a blockchain-based solution for automated document signing and approval as part of an ongoing e-government reform. As most of the checks are row-based (if a field matches the reference value the document is signed; if not, a review begins), smart contracts can handle them without any trouble.
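
The production checks are codified as smart contracts; the shell sketch below only illustrates the row-based logic, with a hypothetical registry endpoint standing in for the oracle:

# compare a document field against the value held by a trusted registry
REGISTRY_FLAG=$(curl -s "https://registry.example.com/vessel/9074729/flag")
DOC_FLAG=$(jq -r '.flag' document.json)
if [ "$DOC_FLAG" = "$REGISTRY_FLAG" ]; then
  echo "auto-approve the document"
else
  echo "raise an alert for manual review"
fi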

The solution

The system we designed is currently being tested in one of Ukraine's seaports. All the rules and dependencies of the approval process are codified in smart contracts that check the documents. If everything is in order, a document is approved within seconds; if something is wrong, an alert is raised to the appropriate personnel. The system works smoothly with oracles (international maritime registries, as well as Lloyd's marine insurance) and other trustworthy data sources required to process and approve the documents.


The results

Launching our system helped the Sea Ports Authority cut document processing time by 60-70%, leading to a significant reduction in payroll expenses. It also helps exclude fraud and serves as a cornerstone for efficient e-government implementation.

AWS VPC peering in an AWS Organization

AWS CloudFormation allows automating the VPC peering process to a great extent.

It handles VPC peering request creation and acceptance at the same time, so everything should work well as long as the connections are established between VPCs in the same AWS account, even if they are located in different AWS regions.

But things become more complicated when you try to do it between different AWS accounts. To establish a connection, a VPC peering request must be issued from a VPC in the Requester AWS account and accepted in the Accepter AWS account. When doing so, the Accepter MUST create a Role on their side, which the Requester can then assume in order to confirm the VPC peering request.
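
For illustration, here is the same flow expressed with the AWS CLI (all IDs and the Role ARN are placeholders):

# Requester side: issue the peering request
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0aaa --peer-vpc-id vpc-0bbb \
  --peer-owner-id 222222222222 --peer-region eu-west-1
# assume the Role the Accepter created...
aws sts assume-role \
  --role-arn arn:aws:iam::222222222222:role/AcceptVpcPeering \
  --role-session-name accept-peering
# ...and, using the temporary credentials it returns, accept the pending request
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0ccc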


VPC peering configuration solution from IT Svit

In our case, it was unacceptable to let different AWS accounts manage each other's resources, so we couldn't use the CloudFormation solution. Instead, we created two Terraform manifests (accepter.tf and requester.tf), and there are two ways of using our solution:

  • Variant A: We have access to both AWS accounts and have all the needed permissions. We initiate the VPC peering request in the Requester AWS account (using the requester.tf Terraform manifest) and confirm it in the Accepter AWS account (using the accepter.tf Terraform manifest). We chose Terraform because it allows managing the VPC peering request separately from its confirmation.
  • Variant B: We have access to the Requester AWS account only. In this case, we only create the VPC peering request (using the requester.tf Terraform manifest). The request will remain in the pending state for as long as it takes the admin of the Accepter AWS account to accept it (manually via the AWS web console, or using the accepter.tf Terraform manifest).

Variant A


  1. Apply the requester.tf Terraform manifest in Requester AWS account
  2. Apply the accepter.tf Terraform manifest in Accepter AWS account
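
In shell terms, assuming both AWS accounts are configured as local credentials profiles (the profile names are placeholders), Variant A boils down to two applies:

# in the directory holding requester.tf, with the Requester account's credentials
AWS_PROFILE=requester terraform apply
# in the directory holding accepter.tf, with the Accepter account's credentials
AWS_PROFILE=accepter terraform apply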

Variant B


  1. Apply the requester.tf Terraform manifest in Requester AWS account
  2. Ask the admin of the Accepter AWS account to accept the request

Final thoughts on VPC peering in an AWS organization

As a result, our solution makes establishing the VPC peering connection between any two AWS accounts very simple, as all the actions are combined in two Terraform manifests. Feel free to use our AWS VPC peering guide and if you need help or consultation with creating custom DevOps solutions — give us a nudge, we are always glad to help!

Terraform module for automated MongoDB backup

MongoDB is one of the most widely used databases out there, and creating backups for it is a crucial, yet routine task not to be taken lightly. This is why we decided to automate the process.


Manual backups are utterly outdated, not to mention that they mean keeping all the peculiarities in mind and tagging the copies by hand. Automated backups require certain libraries that are not present by default, and a DevOps team will most likely learn this the hard way.

With that in mind, we wanted to achieve the following results when creating a Terraform module for automated MongoDB backup:

  • Automated management of the backup process (you only need access to your AWS account, Terraform installed, and our solution)
  • Creating a structured and easily accessible registry of backups (storing the several latest versions of each backup and ensuring they are available from multiple access points)


Screenshot 1: AWS CloudFormation interface with a list of periodic tasks

Working with the automated MongoDB backup tool from IT Svit

We currently have a Terraform manifest in place that installs all the needed libraries and dependencies, enabling automated backups to AWS S3 cloud storage using the AWS CloudFormation and Data Pipeline tools. CloudFormation works as a scheduler, starting the Data Pipeline task that creates a MongoDB dump and stores it in the S3 bucket. The process logs can be accessed through Terraform or through the Data Pipeline web interface.
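
The essence of what the pipeline task runs can be reduced to a one-liner; the connection string and bucket name below are placeholders:

# dump the database and stream the compressed archive straight to S3
mongodump --uri "mongodb://db-host:27017" --archive --gzip \
  | aws s3 cp - "s3://my-backup-bucket/mongo/$(date +%F).archive.gz"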

Screenshot 2: The list of backuping tasks in AWS Data Pipeline

Screenshot 3: Database backups stored within an AWS S3 bucket

Screenshot 4: Each backup is versioned, meaning a specific version of any file can be restored if need be.

All in all, this is a neat little tool solving some of the major headaches of any DevOps team: automated database backups, backup version monitoring and simple recovery on request.

Ansible, AWS CLI and Kubectl in portable Docker OpsBox

This is a description of another one of the neat little tools we made for our toolkit: Docker OpsBox, a portable runtime environment for error-proof Ansible + AWS CLI + Kubectl launches on any admin's machine.

We believe Docker will become the mainstay of corporate software development, so we continue to build more and more solutions that use the possibilities the Docker platform provides. For example, one day we decided to remove the “works on my machine” problem once and for all: the case when the same operation yields different results because another machine has different versions of certain components installed, or lacks some components outright. The most obvious solution was an encapsulated user environment for infrastructure management.

The goals of developing the Ansible AWS CLI Kubectl container

We wanted to get the following results:

  • Provide an encapsulated user environment for working with the infrastructure
  • Exclude the “works well on my machine” situation
  • Launch solutions like our AWS+Kubernetes container on any computer, whether for demonstration purposes or as a portable runtime environment

The Ansible AWS CLI Kubectl tool features


Currently, the container available on GitHub and Docker Hub provides commands for basic operations like launching, stopping, rebooting and deletion, as well as some advanced functionality, like inserting the current user's credentials into the containerized services at launch, which removes the need to enter the login and password manually. You can simply launch the container and immediately use the AWS CLI and Kubectl tools you need, without any additional authentication. The tool also contains Docker commands for interactive Ansible input and colorized output.

We also have a Kube-AWS branch, yet it was not merged into the main repo, as users might have a different version of Kube-AWS in place, and these versions are not backward-compatible. Regardless, our container can be used as a user environment for infrastructure management on a machine without Kube-AWS installed. Another option is using it for demonstration purposes on a clean machine.

Here is an example of what customers might face while installing the components manually (without using our solution):

# pip install awscli

# pip install ansible

Here is how the situation looks when using our Docker OpsBox:
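
A single docker run replaces the whole manual setup. A minimal sketch, assuming a hypothetical image name and mount points (not the exact published tags); the host needs nothing but Docker, and the mounted credentials are picked up by the tools inside:

docker run -it --rm \
  -v ~/.aws:/root/.aws \
  -v ~/.kube:/root/.kube \
  itsvit/opsbox aws s3 ls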

Currently under development

The following improvements are planned:

  • Pinning every component to a specific release
  • Tagging the container releases following the SemVer rules (the first version would be 0.1.0, the version with an updated Kubectl would be 0.2.0, etc.)

The results so far

We have created Docker OpsBox as one of the tools for using our AWS+Kubernetes solution. We are going to improve it and increase the range of the components included.

So feel free to contact us should you want a custom container developed for your needs, as we have ample experience creating DevOps solutions for our customers.

Docker Selenium Codeception Jenkins Container

This is a description and an installation/configuration manual for our Docker Selenium Codeception Jenkins container. Following it and using the code snippets below will help you deploy the whole Codeception testing ecosystem about 30 times faster: in 10 minutes instead of several hours. You won't have to worry about version compatibility, and you keep the flexibility of choosing between browsers: Google Chrome and Mozilla Firefox are included, with the corresponding web drivers.

Provisioning a Selenium/Codeception testing environment: 8 hours of hard work

We faced the same problem many DevOps engineers worldwide have to deal with: Codeception is a great solution for automated testing, yet provisioning the software ecosystem needed to work with it is a pain in the neck. Keeping all the needed packages and their correct versions in mind while installing and configuring all the components was menial work that sometimes took up to 8 hours if any issues occurred. Issues did occur quite often, so we decided to eliminate the problem once and for all. A Docker container with pre-installed Jenkins, Codeception, Selenium, Google Chrome, Mozilla Firefox and the appropriate WebDrivers was the obvious solution.


Automation and ease of deployment: a Docker Selenium Codeception Jenkins container

We wanted to achieve the following improvements and benefits with this container:

  • A stable solution for rapid deployment of Codeception testing environments
  • Automation and ease of deployment due to error-proof step-by-step script
  • Ease of reproducibility, ensuring the same test results across multiple testers and avoiding the “works on my machine” situation

We had our share of experiments and errors, yet the resulting container solves all the issues mentioned above.

Preparing a Docker container: Dockerfile for installing the components

Below we will provide a step-by-step sequence of actions needed to create the Dockerfile and run the Selenium Codeception container easily, along with some explanations.

  1. Begin preparing the image with having only bare Jenkins installed onto it:
    FROM jenkins
  2. Change the user to root in order to be able to work with Selenium without any issues later on:
    USER root
  3. Install Xvfb (X Virtual Framebuffer) and PHP5 with the required modules:
    RUN apt-get update
    #install PHP xvfb
    RUN apt-get install -y php5 php5-curl php5-gd xvfb
  4. Install Selenium next:
    # Install Selenium
    RUN mkdir -p /opt/selenium
    RUN wget --no-verbose -O /opt/selenium/selenium-server-standalone-2.53.1.jar http://selenium-release.storage.googleapis.com/2.53/selenium-server-standalone-2.53.1.jar
    RUN chmod +x /opt/selenium/selenium-server-standalone-2.53.1.jar
  5. Install the Chrome WebDriver:
    # Install Chrome WebDriver
    RUN wget --no-verbose -O /tmp/chromedriver_linux64.zip http://chromedriver.storage.googleapis.com/2.28/chromedriver_linux64.zip
    RUN mkdir -p /opt/chromedriver-2.28
    RUN unzip /tmp/chromedriver_linux64.zip -d /opt/chromedriver-2.28
    RUN chmod +x /opt/chromedriver-2.28/chromedriver
    RUN rm /tmp/chromedriver_linux64.zip
    RUN ln -fs /opt/chromedriver-2.28/chromedriver /opt/selenium/chromedriver
  6. Install a clean Google Chrome:
    # Install Google Chrome
    RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
    RUN echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list
    RUN apt-get -y update
    RUN apt-get -y install google-chrome-stable
  7. Disable the Chrome sandbox in order to fix the compatibility of Google Chrome with Selenium. If the sandbox is enabled, Selenium can't run tests via Google Chrome. This is admittedly a dirty workaround, but it makes life much easier, as the issue just won't be fixed by the Selenium team:
    # Google Chrome -- no sandbox
    COPY google-chrome /opt/google/chrome/
    RUN chmod +x /opt/google/chrome/google-chrome
  8. Install composer for downloading the PHP modules:
    # Install composer
    ENV COMPOSER_ALLOW_SUPERUSER=1
    RUN curl -sS https://getcomposer.org/installer | php -- \
    --filename=composer \
    --install-dir=/usr/local/bin
    RUN composer global require --optimize-autoloader \
    "hirak/prestissimo"
    RUN php -v
  9. Install Firefox:
    # Install Firefox
    RUN touch /etc/apt/sources.list.d/debian-mozilla.list
    RUN echo "deb http://mozilla.debian.net/ jessie-backports firefox-release" > /etc/apt/sources.list.d/debian-mozilla.list
    RUN wget mozilla.debian.net/pkg-mozilla-archive-keyring_1.1_all.deb
    RUN dpkg -i pkg-mozilla-archive-keyring_1.1_all.deb
    RUN apt-get update
    RUN apt-get install -y firefox
  10. Install Codeception:
    # Install Codeception
    RUN touch /usr/local/bin/codecept
    RUN curl http://codeception.com/releases/2.2.8/codecept.phar -o /usr/local/bin/codecept
    RUN chmod +x /usr/local/bin/codecept
    #RUN php codecept.phar bootstrap
  11. Copy the start.sh script and issue the run command:
    # ADD start.sh
    COPY start.sh /usr/local/bin
    ENTRYPOINT ["/bin/bash"]
    CMD ["/usr/local/bin/start.sh"]

The image is ready now, so save it for further usage and let’s move on!
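
Building and launching the image is then a matter of two commands (the tag name is a placeholder):

docker build -t selenium-codeception-jenkins .
docker run -d -p 8080:8080 selenium-codeception-jenkins

Jenkins then becomes available on port 8080 of the host.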

Start.sh script explanation

The start.sh script runs inside the container and launches Selenium, complete with the Chrome WebDriver, and then Jenkins.

  1. Launching the X Virtual Framebuffer:
    xvfb-run
  2. Launching Selenium
    java -jar /opt/selenium/selenium-server-standalone-2.53.1.jar
  3. Launching the chromedriver to act as an API bridge between Selenium and Chrome:
    -Dwebdriver.chrome.driver=/opt/selenium/chromedriver
  4. Suppressing the stdout and stderr to remove the unneeded output:
    &>/dev/null
  5. Appending & to run Selenium in the background and avoid blocking the rest of the script:
    &
  6. Launching Jenkins:
    /bin/tini -- /usr/local/bin/jenkins.sh

The full script looks as follows and can be adjusted with the appropriate browser driver should you so desire:

#!/bin/bash

xvfb-run java -jar /opt/selenium/selenium-server-standalone-2.53.1.jar -Dwebdriver.chrome.driver=/opt/selenium/chromedriver &>/dev/null &
/bin/tini -- /usr/local/bin/jenkins.sh

The progress so far

This tool shortens testing environment deployment time by more than 30 times, down to 10 minutes.

We are working to implement the following features:

  • Safari WebDriver support for Selenium
  • A supervisor with process IDs, so Jenkins can swiftly manage multiple testing environments, with separate outputs for Jenkins.log and Selenium.log
  • Keeping the Jenkins container untouched and using a Jenkins worker instead

The repo also contains a stable combination of the Chrome WebDriver, Selenium and Google Chrome packages for your convenience. Clone it, as you will surely want to poke around and adjust the container to better fit your unique requirements! Good luck, and if you need any more assistance — don't hesitate to drop us a line!

Monitoring

Overview

Set up monitoring with metrics visualization and alerting for a Kubernetes cluster.

Solution

Prometheus is configured as the database for metrics collected from node exporters, and the results are displayed via Grafana.
Prometheus is an open-source, very powerful and flexible monitoring system that lets you monitor virtually anything you need.
The Prometheus setup for Kubernetes contains the following components (a deployment sketch follows the list):

  • prometheus-core
  • grafana-core
  • kube-state-metrics-deployment
  • prometheus-node-exporter (daemon set)
  • alertmanager
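
A minimal deployment sketch, assuming each component ships as its own Kubernetes manifest (the file names are placeholders):

kubectl create namespace monitoring
kubectl apply -n monitoring -f prometheus-core.yaml
kubectl apply -n monitoring -f grafana-core.yaml
kubectl apply -n monitoring -f kube-state-metrics-deployment.yaml
kubectl apply -n monitoring -f prometheus-node-exporter-daemonset.yaml
kubectl apply -n monitoring -f alertmanager.yaml
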
Technologies

Prometheus, Grafana, Alertmanager

HA MySQL containerized solution

Overview

We deliver a highly available MySQL solution for ensuring data availability and avoiding downtime.

Solution

Three MySQL instances provide uninterrupted operation even if one instance fails or becomes unreachable. A load balancer provides a unified entry point to the MySQL cluster, so there is no need to change the application configuration if a MySQL host fails.
The container-based solution allows deploying the MySQL cluster to dedicated hosts the same way as to a Kubernetes or Docker Swarm cluster, etc.
At least three MySQL instances should be provided, but any odd number of instances (3, 5, 7…) can be configured as a cluster.
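
In practice, the single entry point means the application always connects to the balancer, never to an individual node; a tiny sketch (the balancer host is a placeholder):

# if one of the nodes fails, the balancer routes the connection to a healthy one
mysql -h mysql-lb.example.com -P 3306 -u app -p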

Technologies

MySQL, load balancer
