Trinity V3.7, launched this month, features Big Data support
As web and mobile business services proliferate, unstructured data is piling up fast, not only on the web but across general business operations as well. Storage and analysis requirements have outpaced the capabilities of conventional database structures. Big Data is booming.
"In practice, Big Data is about more than data volume. It involves three key 'V' factors:
- Volume – data volume measured in terabytes;
- Velocity – whether in batch, real-time, or streaming mode, responses must be fast: online ads require a response within 40 milliseconds, while a credit system requires a credit-rating response within one millisecond;
- Variety – data may come in structured, unstructured, or semi-structured format, or any combination of these.
The new edition of Trinity, the trusted data management helper, comes with a Big Data interface and the processing capacity to address the challenges customers face across a range of data properties, volumes, and processing performance requirements, so as to extract the best business value from Big Data," noted XIE Zhenze, Trinity Data's R&D Head.
Hadoop, the most mature Big Data technology available today, is handicapped by an architecture that is difficult to adapt for application development and maintenance. WU Chengyu, Trinity Data's Technology Director, explains, "Hadoop's strength in parallel data processing comes at the cost of loading data into the Hadoop environment in advance. Exchanging and processing data between a conventional analysis database and the Hadoop environment is a very challenging task. Built on Trinity's unique component-expansion architecture, Trinity Data's Big Data Adaptor seamlessly integrates Hadoop's Big Data processing power with conventional ETL operations. Users can add Hadoop-specific processing capacity to an existing ETL system without advanced Hadoop administration expertise. This protects the investment in existing systems while making the most of the processing power Hadoop brings."
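The component-expansion idea described above, where a Hadoop-backed step plugs into an existing ETL pipeline alongside conventional steps, can be sketched in generic terms. Trinity's actual API is not shown in this announcement, so all names below (`EtlStep`, `FilterStep`, `HadoopAdaptorStep`, `run_pipeline`) are hypothetical illustrations of the pattern, and the "Hadoop" step runs locally to keep the sketch self-contained:

```python
from abc import ABC, abstractmethod

class EtlStep(ABC):
    """A pluggable pipeline component; an adaptor follows the same shape
    as any conventional ETL step, so the two can be mixed freely."""
    @abstractmethod
    def run(self, records):
        ...

class FilterStep(EtlStep):
    """A conventional ETL step: keep only records matching a predicate."""
    def __init__(self, predicate):
        self.predicate = predicate
    def run(self, records):
        return [r for r in records if self.predicate(r)]

class HadoopAdaptorStep(EtlStep):
    """Stand-in for a Hadoop-backed step. In a real deployment this would
    ship records to HDFS, launch a MapReduce/Hive job, and read the results
    back; here it applies the map function locally so the sketch runs."""
    def __init__(self, map_fn):
        self.map_fn = map_fn
    def run(self, records):
        return [self.map_fn(r) for r in records]

def run_pipeline(steps, records):
    """Feed the output of each step into the next, adaptors included."""
    for step in steps:
        records = step.run(records)
    return records

pipeline = [
    FilterStep(lambda r: r["amount"] > 0),
    HadoopAdaptorStep(lambda r: {**r, "amount_cents": r["amount"] * 100}),
]
rows = [{"amount": 5}, {"amount": -1}, {"amount": 2}]
print(run_pipeline(pipeline, rows))
# → [{'amount': 5, 'amount_cents': 500}, {'amount': 2, 'amount_cents': 200}]
```

Because the adaptor exposes the same interface as every other step, swapping local processing for Hadoop-backed processing requires no change to the rest of the pipeline, which is the investment-protection point made in the quote.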
In addition to acting as a data exchange bridge between conventional ETL operations and the Hadoop environment, Trinity's Hadoop Data Adaptor module also:
- Simplifies HDFS read-write operations, providing HDFS Reader/Writer components for reading and writing HDFS files directly.
- Integrates the HBase database, providing HBase Reader/Writer components for reading and writing HBase tables directly.
- Integrates the Hive and Pig languages through Hive and Pig runtime components.
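A Reader/Writer component pair like those listed above typically wraps the underlying storage client behind a simple read/write interface. The class names `HdfsReader`/`HdfsWriter` below are hypothetical, not Trinity's documented API, and a dictionary stands in for the HDFS namespace so the sketch runs without a cluster; a real component would call an HDFS client (for example, WebHDFS) in the same positions:

```python
class HdfsWriter:
    """Hypothetical writer component: persists lines to a path.
    `storage` is a dict standing in for an HDFS namespace."""
    def __init__(self, storage):
        self.storage = storage
    def write(self, path, lines):
        self.storage[path] = "\n".join(lines)

class HdfsReader:
    """Hypothetical reader component: returns the lines stored at a path."""
    def __init__(self, storage):
        self.storage = storage
    def read(self, path):
        return self.storage[path].split("\n")

fs = {}
HdfsWriter(fs).write("/data/orders.txt", ["order1", "order2"])
print(HdfsReader(fs).read("/data/orders.txt"))
# → ['order1', 'order2']
```

Presenting storage access as paired components is what lets an ETL designer drop HDFS or HBase reads and writes into a flow the same way as any file or table step.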