Solutions & Accelerators

HSC has developed several Solutions (ready-to-run/white-label offerings) and Accelerators (stable software components that can be integrated into customer products) across different domains and verticals that help our customers reduce time to market.

To learn more about our solutions, please click on the "Solutions" top menu and navigate to the solution you'd like to read about.

HSC offers the following Accelerators today:

Big Data Pipeline for Analytics


HSC's Big Data Pipeline for Analytics is a high-speed distributed architecture that is ideal as a core enabler for any system requiring high-speed processing and real-time analytics of millions of transactions without data loss. In fact, some of HSC's solutions, such as OTT Video Delivery QoE and WiFi Analytics, leverage HSC's Big Data Pipeline.

Background on Data Processing:

Data processing primarily involves four stages:


  1. Extract, Transform and Load (ETL): It should be possible to ingest data from existing data source(s). A data source may either push data, or data may be pulled for processing. Incoming data may be cleansed or transformed before processing.
  2. Data Processing: Incoming data is processed according to the business objectives to generate information.
  3. Data Storage: Generated information may be stored using persistent or volatile storage. Other systems may pull the information from this storage.
  4. Data Visualization: Generated information may be displayed on dashboards.
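
As a purely illustrative sketch, the four stages can be expressed as a simple pipeline. The function names below are hypothetical and not part of any HSC product:

```python
# Illustrative sketch of the four data processing stages (hypothetical names).

def extract_transform_load(raw_records):
    """ETL: ingest records from a source and cleanse/transform them."""
    return [r.strip().lower() for r in raw_records if r]  # toy cleansing step

def process(records):
    """Data Processing: apply business rules to turn data into information."""
    return {"record_count": len(records)}

def store(information, sink):
    """Data Storage: persist generated information for downstream systems."""
    sink.append(information)

def visualize(information):
    """Data Visualization: surface the information on a dashboard (here, stdout)."""
    print(information)

if __name__ == "__main__":
    sink = []
    info = process(extract_transform_load([" Sensor-1 ", "", " Sensor-2 "]))
    store(info, sink)
    visualize(info)  # prints {'record_count': 2}
```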

Challenges in Data Processing:

Each data processing stage has its own intricacies, and various options/tools are available for each. The challenge lies in identifying the right tool for each stage and then integrating the chosen tools into a data processing framework. The framework must address the following high-level requirements:

  1. It must be possible to integrate new data sources.
  2. The framework must allow the processing infrastructure to scale dynamically with the incoming data rate while honoring the associated SLAs.
  3. Generated information may need to be stored in various places, such as a database or a local/network file system, or passed on to another system.
  4. The framework must offer low latency and high throughput.
  5. The framework must monitor the various processing components and handle failures gracefully.

HSC's Big Data Analytics Framework:


Key Features of the Framework:

  • The framework uses Kafka messaging for storing incoming data and for sharing data among the various data processing tasks. Kafka is a distributed, low-latency, high-throughput messaging system with support for some unique consumption patterns, which makes it ideal for storing and processing large quantities of data (a minimal sketch of the Kafka/Spark flow follows this list).
  • Spark is used as the data processing framework. Spark scores over Hadoop both in performance and in ease of use, and comes with an excellent support ecosystem and numerous out-of-the-box integrations. Spark allows the same code to be used for both online and offline processing, and processing infrastructure can be added or removed dynamically without impacting ongoing tasks.
  • All incoming data may be stored in HDFS as Parquet files, which makes it possible to reprocess historical data using new business rules.
  • Redis is used as the distributed caching framework for storing frequently used application data (see the DAO/caching sketch after this list).
  • Processed data may be stored in a medium of choice by writing a custom DAO layer.
  • All the components of the system and the KPIs (such as the maximum incoming data rate) are monitored using Prometheus. It is possible to define monitoring rules and the associated actions.
  • The framework can be hosted locally or in the cloud.
  • Docker containers and stack services are used to expedite deployment and streamline monitoring.
  • Any datastore can be used, as all interaction happens via a data abstraction layer.
  • Highly customizable and tunable for the specific use case at hand.
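
A minimal sketch of how the Kafka, Spark and Parquet pieces above might fit together, using PySpark Structured Streaming. The broker address, topic name, schema and HDFS paths are illustrative assumptions, not HSC's actual configuration:

```python
# Hedged sketch: ingest events from Kafka, process them with Spark Structured
# Streaming, and archive the raw stream to HDFS as Parquet for later reprocessing.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window
from pyspark.sql.types import DoubleType, LongType, StringType, StructType

spark = SparkSession.builder.appName("analytics-pipeline").getOrCreate()

# Assumed JSON schema for incoming events (illustrative only).
schema = (StructType()
          .add("device_id", StringType())
          .add("metric", DoubleType())
          .add("ts", LongType()))

# Kafka as the backbone for incoming data.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "kafka:9092")
       .option("subscribe", "incoming-events")
       .load())

events = (raw.select(from_json(col("value").cast("string"), schema).alias("e"))
             .select("e.*"))

# Archive raw events to HDFS as Parquet so history can be reprocessed later.
archive = (events.writeStream
           .format("parquet")
           .option("path", "hdfs:///data/events/parquet")
           .option("checkpointLocation", "hdfs:///checkpoints/archive")
           .start())

# Example real-time processing: per-device averages over one-minute windows.
per_device = (events
              .withColumn("event_time", (col("ts") / 1000).cast("timestamp"))
              .groupBy(window(col("event_time"), "1 minute"), col("device_id"))
              .avg("metric"))

query = (per_device.writeStream
         .outputMode("update")
         .format("console")
         .start())

spark.streams.awaitAnyTermination()
```

In an actual deployment, the console sink would be replaced by the DAO layer described above, and the job's KPIs would be exported to Prometheus.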

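Along the same lines, a hedged sketch of the data abstraction (DAO) layer with a Redis cache in front of it; the class and method names are illustrative assumptions rather than the framework's actual API:

```python
# Hedged sketch of a data access object (DAO) abstraction with a Redis cache
# in front of it. Class and method names are hypothetical, for illustration only.
import json
from abc import ABC, abstractmethod

import redis


class ResultDAO(ABC):
    """Abstract DAO: any datastore can sit behind the same interface."""

    @abstractmethod
    def save(self, key, record):
        ...

    @abstractmethod
    def load(self, key):
        ...


class InMemoryResultDAO(ResultDAO):
    """Toy implementation; a Cassandra/Postgres/S3 DAO would have the same shape."""

    def __init__(self):
        self._store = {}

    def save(self, key, record):
        self._store[key] = record

    def load(self, key):
        return self._store.get(key)


class CachedResultDAO(ResultDAO):
    """Puts a Redis cache in front of any DAO for frequently used application data."""

    def __init__(self, backend, redis_client, ttl_seconds=300):
        self._backend = backend
        self._redis = redis_client
        self._ttl = ttl_seconds

    def save(self, key, record):
        self._backend.save(key, record)
        self._redis.setex(key, self._ttl, json.dumps(record))

    def load(self, key):
        cached = self._redis.get(key)
        if cached is not None:
            return json.loads(cached)
        record = self._backend.load(key)
        if record is not None:
            self._redis.setex(key, self._ttl, json.dumps(record))
        return record


# Usage sketch: dao = CachedResultDAO(InMemoryResultDAO(), redis.Redis(host="localhost"))
```
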
Potential Use Cases of HSC's Big Data Analytics Framework:

Organizations all over the world are looking to set up a centralized data processing framework: all enterprise data flows into this pipeline and is processed according to pre-configured rules, with access to both raw and processed data controlled at all times. This pattern enables greater coordination among different teams and departments within the same enterprise. It also allows enterprises to share data processing infrastructure among teams, which brings down the overall cost.

HSC's data processing framework is highly suitable for such large-scale enterprise data management needs. The framework facilitates ingesting data and processing it in real time. Batch jobs can be executed to generate time-consuming, non-real-time reports, and historical data can be archived on HDFS/S3 and retrieved as and when needed. The framework also enables close coordination between different data processing jobs via Kafka.
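
As a small illustration of the batch side, a Spark batch job can read the archived Parquet data and produce an offline report. The path below reuses the hypothetical archive location from the streaming sketch above, and the column names are likewise assumptions:

```python
# Hedged sketch: offline batch report over historical data archived as Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("historical-report").getOrCreate()

# Historical events previously archived by the streaming job (HDFS or S3).
events = spark.read.parquet("hdfs:///data/events/parquet")

# Example non-real-time report: daily average metric per device.
report = (events
          .withColumn("day", F.to_date((F.col("ts") / 1000).cast("timestamp")))
          .groupBy("day", "device_id")
          .agg(F.avg("metric").alias("avg_metric")))

# Persist the report wherever the data abstraction layer points; CSV here for simplicity.
report.write.mode("overwrite").csv("hdfs:///reports/daily_avg_metric", header=True)
```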

Hughes Systique has used this framework in various customer projects and solutions. Two of HSC's accelerators, “OTT Video Delivery QoE” and “WiFi Analytics Solution”, leverage this framework. The two solutions have quite different data processing needs, but the framework's flexibility allowed it to be adapted easily to both.