Pentaho – Groups
Pentaho offers various functionalities and features to convert raw data into meaningful information. Here, we will learn how to use one such feature, Groups. You can use this feature to segregate…
Each page of a report contains two special areas. At the top of every page, you will find the page-header area. And at the bottom of the page, you will find the page-footer area.…
Most reporting elements can easily be added by dragging and dropping them from the Data pane onto any of the bands in the workspace (usually the Details band). Let us continue…
We will learn to use the Pentaho Reporting Designer by taking an example. We will create a report on the employee database to produce a quick overview of every employee.…
In this chapter, we will provide a brief introduction to the user interfaces available in Pentaho and how to navigate through them. The Welcome Screen The welcome screen provides two ways…
Let us now learn how to install and configure Pentaho Reporting Designer. Prerequisites The Pentaho Reporting engine requires a Java environment. Therefore, before installing Pentaho Reporting, make sure you have Java…
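Before installing, you can confirm from the command line that a Java runtime is present. A minimal check (the exact Java version required depends on your Pentaho release, so consult its release notes):

```shell
# Print the Java version if a runtime is installed, otherwise warn.
if command -v java >/dev/null 2>&1; then
  java -version
else
  echo "No Java runtime found; install a JDK/JRE before installing Pentaho Reporting"
fi
```

If `java -version` prints nothing or the command is not found, install a JDK/JRE and make sure it is on your `PATH` before proceeding.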
Pentaho Reporting is a suite (collection of tools) for creating relational and analytical reports. It can be used to transform data into meaningful information. Pentaho allows generating reports in HTML,…
The HCatLoader and HCatStorer APIs are used with Pig scripts to read and write data in HCatalog-managed tables. No HCatalog-specific setup is required for these interfaces. HCatLoader HCatLoader is used with Pig scripts to…
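A minimal Pig Latin sketch of the two interfaces (the table names `sampletable` and `outputtable` are assumptions; the loader class shown is the one shipped with Hive's HCatalog, and the script is typically run with `pig -useHCatalog`):

```pig
-- Read an HCatalog-managed table; the schema comes from HCatalog, not the script
A = LOAD 'sampletable' USING org.apache.hive.hcatalog.pig.HCatLoader();

-- Write the relation into another HCatalog-managed table
STORE A INTO 'outputtable' USING org.apache.hive.hcatalog.pig.HCatStorer();
```

Note that older HCatalog releases used the package prefix `org.apache.hcatalog.pig` instead of `org.apache.hive.hcatalog.pig`.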
The HCatInputFormat and HCatOutputFormat interfaces are used to read data from HDFS and, after processing, write the resultant data into HDFS using a MapReduce job. Let us elaborate on the input and output format interfaces. HCatInputFormat…
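A rough job-configuration sketch of how these two classes are wired into a MapReduce driver (the database `default` and the table names are assumptions, the mapper/reducer are elided, and compiling it requires the Hadoop and HCatalog jars on the classpath):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hive.hcatalog.mapreduce.HCatInputFormat;
import org.apache.hive.hcatalog.mapreduce.HCatOutputFormat;
import org.apache.hive.hcatalog.mapreduce.OutputJobInfo;

public class HCatJobSketch {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "hcat-example");
        job.setJarByClass(HCatJobSketch.class);

        // Read rows of an HCatalog-managed table ("source_table" is an assumed name)
        HCatInputFormat.setInput(job, "default", "source_table");
        job.setInputFormatClass(HCatInputFormat.class);

        // A real job would set its Mapper/Reducer classes here

        // Write the results into another managed table ("target_table" is assumed)
        HCatOutputFormat.setOutput(job, OutputJobInfo.create("default", "target_table", null));
        job.setOutputFormatClass(HCatOutputFormat.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```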
HCatalog contains a data transfer API for parallel input and output without using MapReduce. This API uses a basic storage abstraction of tables and rows to read data from Hadoop…