The OEM wanted to monitor the performance of their engine test bed from a remote location.
Python, PostgreSQL time-series DB, Metabase, Grafana
STY has a state-of-the-art Manufacturing Intelligence solution. With it, companies can acquire data from asset PLCs, transfer it to the cloud through an edge device, perform asset condition monitoring, generate OEE dashboards, track asset maintenance history, and run predictive maintenance.
The OEM’s core requirement was remote monitoring of the engine test bed. They asked STY for a proof of concept to demonstrate the capabilities of the Manufacturing Intelligence solution. STY delivered the proof of concept successfully and exceeded the customer’s expectations.
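As a flavour of the acquisition path described above, here is a minimal sketch of an edge-side Python script writing sampled test-bed readings into a PostgreSQL time-series table for Metabase or Grafana to query. The table name, columns, and connection details are illustrative assumptions, not the actual STY schema.

```python
# Illustrative sketch only: persisting periodic engine test-bed readings into
# a PostgreSQL time-series table. Schema and credentials are assumptions.
import psycopg2

conn = psycopg2.connect("dbname=testbed user=monitor password=secret host=edge-db")

def store_reading(engine_id, rpm, temperature_c, oil_pressure_bar):
    """Insert one sampled reading; 'engine_readings' is a hypothetical table
    keyed by timestamp so dashboards can run time-series queries over it."""
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO engine_readings (ts, engine_id, rpm, temperature_c, oil_pressure_bar)
            VALUES (now(), %s, %s, %s, %s)
            """,
            (engine_id, rpm, temperature_c, oil_pressure_bar),
        )

store_reading("TB-01", 2150, 92.4, 4.8)
```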
The customer’s customer (a leading oil refinery in Saudi Arabia) required operational dashboards to track certain KPIs across its multiple plants in the country, in near real time.
Power BI, Power BI Report Server, Gateway, SQL Server Stored Procedures
The customer wanted to track KPIs for Process Area, Tank Farms, Energy, Environment, Heat, Marine, YRD Corrosion, and Feed Products. A total of 12 main dashboards and 8 drill-through reports were developed.
The data source was machine- and sensor-generated data, already in structured form. The customer insisted on using SQL Server stored procedures to pull the required data, so parameterized stored procedures were developed and used as the data source for Power BI. The dashboard layouts were complex, as the customer wanted the data presented in a specific manner, and some of the requested features were not directly supported by Power BI. We had to do a lot of research and develop a custom solution to address the business problem.
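To illustrate the data-pull pattern, the sketch below issues the same kind of parameterized stored-procedure call from Python with pyodbc; in the actual solution Power BI invokes the procedure from Power Query. The procedure name, parameters, and column names here are assumptions.

```python
# Illustrative only: a parameterized stored-procedure pull of the kind that
# backed the dashboards. Procedure, parameters, and columns are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=refinery-db;"
    "DATABASE=KPI;Trusted_Connection=yes;"
)
cursor = conn.cursor()

rows = cursor.execute(
    "EXEC dbo.usp_GetPlantKPIs @PlantCode = ?, @FromDate = ?, @ToDate = ?",
    "TANK_FARM", "2022-01-01", "2022-01-31",
).fetchall()

for row in rows:
    # Column names are assumed for illustration.
    print(row.KPIName, row.KPIValue)
```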
Another challenge was that the customer had an on-premises installation; Power BI deployment in the cloud is much easier than on-premises deployment. Moreover, the dashboards had to be rendered through the customer’s intranet portal, which posed additional integration challenges. We addressed these successfully and deployed the dashboards.
The customer had IoT device data stored in Google Cloud Bigtable. They decided to migrate from Google Cloud to AWS, with Elasticsearch as the target store. The need was to migrate 3 TB of data to Elasticsearch.
Elasticsearch, Logstash, Kibana, Python
The data from GCP Bigtable was first exported to an AWS S3 bucket as JSON files, and the data pipelines were set up in Logstash. Data cleansing and migration validation were implemented with Python scripts. The customer’s team had initially tried to implement this pipeline themselves but ran into performance issues; we enabled parallel processing in Logstash, improving the data load rate to 1 GB per minute on an AWS t2.medium instance.
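A minimal sketch of the validation step, assuming newline-delimited JSON files in S3 and a single target index; the bucket, prefix, and index names are illustrative assumptions.

```python
# Illustrative migration-validation script: compare record counts in the S3
# JSON export against the target Elasticsearch index. Names are assumptions.
import boto3
from elasticsearch import Elasticsearch

s3 = boto3.client("s3")
es = Elasticsearch("http://localhost:9200")

def count_s3_records(bucket, prefix):
    """Count newline-delimited JSON records across all exported files."""
    total = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"]
            total += sum(1 for line in body.iter_lines() if line.strip())
    return total

source_count = count_s3_records("iot-export", "bigtable/")
target_count = es.count(index="iot-readings")["count"]
assert source_count == target_count, f"Mismatch: {source_count} vs {target_count}"
```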
The client runs 43+ voice accounts for various customers. The leadership team needs certain mission-critical KPIs on a day-to-day basis; however, due to the current system architecture, it does not receive these KPIs on time, resulting in delivery issues and penalties from its customers.
Data Architecture & Overall System Assessment
This project is still in progress, with a planned duration of 9 weeks across four phases: Initiation, Discovery, Analysis, and Recommendation. During the Discovery phase, key customer SMEs were interviewed, along with IT and other supporting functions, to capture pain areas and process gaps. In the Analysis phase, the responses were studied and conclusions drawn. The Recommendation phase is in progress: we are building a solution architecture and an implementation roadmap.
The client is building a loan-monitoring platform in which credit history data is integrated with core banking data containing loan account details. The application will monitor repayment trends against current income and predict the probability of default. During the initial phase, very little data was available for building the income prediction and default probability models, so synthetic data was generated to build the initial models. As more actual data became available, the quality of the synthetic data improved as well.
We receive data from two sources and store it in a database built on a dimensional data model. On top of this, we build an income estimation model and a default probability model. Together, these predict whether a person will default on a loan, based on the personal information provided and the output of the borrower’s income estimation model.
The model is intended as a reference tool to help the client and their financial institution make decisions on issuing loans, so that risk can be lowered and profit maximized. Initially, synthetic data was generated manually using domain knowledge; later, Gretel.ai was used to generate more synthetic data for building the model.
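As a minimal sketch of the default probability model described above, the snippet below trains a logistic regression on a tiny hand-made synthetic frame; the feature names, including the income-estimation output, are assumptions rather than the client’s actual schema.

```python
# Minimal illustrative default-probability model on synthetic data.
# Feature names are assumptions, not the client's schema.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical training frame: borrower features plus the estimated income
# produced by the income-estimation model, with a binary default label.
df = pd.DataFrame({
    "age":              [25, 40, 33, 51, 29, 47],
    "estimated_income": [32000, 85000, 54000, 72000, 28000, 91000],
    "loan_amount":      [20000, 30000, 25000, 15000, 22000, 40000],
    "defaulted":        [1, 0, 0, 0, 1, 0],
})

X, y = df.drop(columns="defaulted"), df["defaulted"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# predict_proba[:, 1] is the predicted probability of default.
print(model.predict_proba(X_test)[:, 1])
```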
Hospitals currently find it difficult to identify which products are under contract and which are not, resulting in higher costs for either hospitals or distributors. This North America-based startup aims to empower distributors and hospitals with integrated, cleansed product data and to notify them whenever product specs change.
Python, Spark, BERT, MongoDB, Neo4j
The solution ingests data from the manufacturer’s product catalogue, the distributor’s product catalogue, and the hospital’s product information system, and integrates this information in a Neo4j database. The data processing jobs are developed in PySpark. A robust AI-based matching algorithm matches products by description, identifying products in the same product category and surfacing products that are close matches to a product under contract.
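A minimal sketch of the description-matching idea, using a BERT-family sentence encoder to embed product descriptions and rank catalogue items by cosine similarity; the model choice and sample data are assumptions, not the production pipeline.

```python
# Illustrative description matching with a BERT-family sentence encoder.
# Model choice and sample catalogue entries are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

contract_product = "Nitrile exam gloves, powder-free, size M, box of 100"
catalogue = [
    "Powder-free nitrile examination gloves, medium, 100/box",
    "Latex surgical gloves, sterile, size 7.5",
    "Nitrile exam gloves, powder-free, size L, box of 100",
]

query_emb = model.encode(contract_product, convert_to_tensor=True)
cat_embs = model.encode(catalogue, convert_to_tensor=True)

# Cosine similarity ranks catalogue items; the top-scoring ones are candidate
# matches to the product under contract.
scores = util.cos_sim(query_emb, cat_embs)[0]
for desc, score in sorted(zip(catalogue, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.2f}  {desc}")
```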
Our client had developed a wearable device that straps around the shoulder. The client wanted to track how patients perform their prescribed exercises on a daily basis. Our role was to build a hybrid mobile app that works on both the iOS and Android platforms.
React Native, Python
Since the client wanted a solution that works on both Android and iOS, we chose the React Native platform. The backend APIs were developed in Python, and connectivity between the phone and the wearable device was established over Bluetooth. We developed interactive charts for understanding exercise patterns, and added alerts and reminders to help patients stick to their exercise routine.
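As a flavour of the Python backend, here is a minimal sketch of an API endpoint that receives an exercise session recorded from the wearable; FastAPI is an assumed framework and the field names are illustrative, not the actual contract.

```python
# Minimal illustrative backend endpoint for recording an exercise session.
# Framework (FastAPI) and field names are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ExerciseSession(BaseModel):
    patient_id: str
    exercise: str
    repetitions: int
    duration_seconds: float

@app.post("/sessions")
def record_session(session: ExerciseSession):
    # In the real solution this would persist the session and feed the
    # interactive charts and reminder logic described above.
    return {"status": "recorded", "patient_id": session.patient_id}
```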