All your Extraction, Transformation and Load needs
RAVN Pipeline delivers a platform approach to all your Extraction, Transformation and Load (ETL) needs. Whether you're dynamically ingesting content into a search engine, connecting and ingesting sources to Microsoft SharePoint, or performing a data migration exercise after a merger or acquisition, RAVN Pipeline provides the assurance both the IT department and the business demand: that the job can be completed successfully and reliably.
The fully flexible staged approach to ingestion that RAVN Pipeline provides allows businesses to perform complex extraction, transformation and enhancement of content as it is ingested. Aside from the obvious benefits of having content rapidly and robustly ingested and made available to your business, the ability to enhance content through the application of business rules or fuzzy logic (especially when combined with the RAVN Core and RAVN Linking engines) can provide the additional insight that makes all the difference.
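As a rough illustration of the staged idea, here is a minimal Python sketch (not RAVN Pipeline's actual implementation): each stage is a function that enriches a document before handing it to the next stage.

```python
# Minimal sketch of staged ingestion: illustrative only, not RAVN
# Pipeline's internals. Each stage enriches a document dict and
# passes it on to the next stage.

def extract_text(doc):
    # Extraction stage: decode the raw source payload into text.
    doc["text"] = doc["raw"].decode("utf-8", errors="replace")
    return doc

def apply_business_rule(doc):
    # Enhancement stage: a hypothetical business rule that flags
    # documents mentioning a merger for later review.
    doc["flagged"] = "merger" in doc["text"].lower()
    return doc

def load_into_index(doc):
    # Load stage: hand the enriched document to the target system
    # (printed here in place of a real search-engine client).
    print(f"indexing {doc['id']} (flagged={doc['flagged']})")
    return doc

STAGES = [extract_text, apply_business_rule, load_into_index]

def ingest(doc):
    for stage in STAGES:
        doc = stage(doc)
    return doc

ingest({"id": "doc-1", "raw": b"Post-merger integration notes"})
```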
The repeatable and auditable nature of the RAVN Pipeline ingestion chain provides confidence that content that should be part of a data corpus has actually been ingested correctly, or, if it has not, shows where and why it failed. Too often, legacy approaches to data connectivity provide little or no audit trail when ingestion fails, or even statistics when it succeeds. With RAVN Pipeline, you can be sure of the status of all connectivity and ingestion jobs, allowing you to take appropriate remedial action or to report reliably on success.
Eliminate Dependence on Third Party Vendors
The cost and potential delays of depending on third-party vendors can be prohibitive, limiting your organisation's ability to respond to inevitable change. By adopting our user-driven approach, your IT function can quickly and easily effect changes to existing connections (for example, reflecting changes in the structure of those content sources at upgrade time) or add new sources via the graphical user interface (GUI), without recourse to third-party vendors. The efficacy of potentially mission-critical applications is therefore never compromised. To save further time, pre-defined ingestion configurations can be deployed or subtly re-purposed with minimal effort.
Crawling and Fetching
Combining Data Sources
Targeted Website Crawling
Web scraping is a useful technique for converting unstructured information on the web into structured form. Using web scraping it is possible, for instance, to crawl and retrieve property postings from various sources and build a powerful property search engine. The built-in web filters make scraping easy to configure and use: scraping rules can be configured and tested through the web-based interface, and specific parts of the crawled web pages can be loaded directly into a search index or a SQL database.
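As a general illustration of the technique (using common open-source libraries rather than Pipeline's built-in web filters; the URL and CSS selectors below are hypothetical), property postings might be scraped like this:

```python
# Illustrative web-scraping sketch using requests and BeautifulSoup;
# RAVN Pipeline configures equivalent rules through its web UI.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/property-listings")
soup = BeautifulSoup(resp.text, "html.parser")

records = []
for posting in soup.select("div.listing"):  # hypothetical selector
    records.append({
        "title": posting.select_one("h2").get_text(strip=True),
        "price": posting.select_one(".price").get_text(strip=True),
    })

# `records` is now structured data, ready to load into a search
# index or a SQL database.
print(records)
```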
RAVN Pipeline is controlled and configured through a graphical web user interface, removing the need for specialist knowledge of local operating systems, text editors or data-source-specific configuration editors. By moving the configuration responsibility from a developer role to an administrator or even a business user, we allow you to dramatically reduce the time it takes to make changes to your ETL processes. In addition to the series of standard plug-and-play stages provided, a web-based scripting stage allows you to go beyond the functionality provided out of the box.
Intelligent Data Processing
The data extraction process is controlled via various data tracking mechanisms that ensure incremental indexing, where only new or altered data is processed.
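One common tracking mechanism is a checksum store consulted on every run; the following is a simplified sketch of that principle (Pipeline's own tracking mechanisms may differ):

```python
# Simplified sketch of incremental ingestion via content checksums.
# Illustrates the principle of skipping unchanged data; not RAVN
# Pipeline's actual tracking implementation.
import hashlib

seen = {}  # doc_id -> checksum recorded on the previous run

def changed(doc_id, content):
    digest = hashlib.sha256(content).hexdigest()
    if seen.get(doc_id) == digest:
        return False          # unchanged: skip reprocessing
    seen[doc_id] = digest     # new or altered: record and process
    return True

for doc_id, content in [("a", b"v1"), ("a", b"v1"), ("a", b"v2")]:
    if changed(doc_id, content):
        print(f"processing {doc_id}")
```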
The architecture allows many instances to be distributed for scaling and load-balancing purposes, in support of Big Data environments or applications where the volume of content and the speed of processing are important.
Job Based Configuration
Once a Pipeline job has been created it is trivial to reuse it for a different data source, drastically reducing implementation times.
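Conceptually, a job is just configuration, so re-pointing it at a new source is a small change; a hypothetical sketch (the field names are invented for the example, not Pipeline's actual job schema):

```python
# Hypothetical illustration of job reuse: the stage chain stays the
# same while only the source definition is swapped out.
base_job = {
    "stages": ["fetch", "extract_text", "apply_rules", "index"],
    "source": {"type": "filesystem", "path": "/mnt/share-a"},
}

# Reuse the same job against a second data source.
second_job = {
    **base_job,
    "source": {"type": "sharepoint", "site": "https://example.sharepoint.com/kb"},
}
print(second_job["stages"] == base_job["stages"])  # True: stages reused
```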
Extend Pipeline Using Its API
Using the Pipeline REST API, it is possible to control, configure and monitor Pipeline and its configured jobs. It is also possible to build custom applications that push data through the Pipeline. With the data push API, the data transformation and load possibilities become virtually unlimited.
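By way of illustration, pushing a document over HTTP might look like the following Python sketch; the endpoint, port and payload shape are hypothetical stand-ins rather than Pipeline's documented API contract:

```python
# Hypothetical sketch of pushing a document into an ingestion job
# over a REST API. Consult the product documentation for the real
# endpoint and payload format.
import requests

doc = {"id": "doc-42", "title": "Board minutes",
       "body": "Minutes of the quarterly board meeting."}

resp = requests.post(
    "http://pipeline.example.com:8080/api/jobs/my-job/push",  # hypothetical
    json=doc,
    timeout=30,
)
resp.raise_for_status()
print("status:", resp.json())  # assumes a JSON status response
```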
Latest Case Study
Read our latest case study to see how our products have benefited a customer's organisation.
- "RAVN's platform has allowed us to implement our vision of providing integrated, secure and effective access to our knowledge resources." – Robin Hall, Head of Knowledge Management at Howard Kennedy
- "RAVN is one of a number of innovative solutions that the business is implementing as part of IT's 2020 vision, aligned to improve efficiencies and productivity benefits across the firm." – Clive Knott, IT Director at Howard Kennedy
- "Our primary goal here is to leverage knowledge to accelerate business and deal decision-making and to harness the collaborative power of a multi-geography growth markets organisation. RAVN Connect is an important component in a differentiation we bring to our investors around how we share knowledge and get to more dynamic decisions on deal flow. Additionally, it helps us harness macro-economic, sectoral and functional knowledge flows seamlessly and is ultimately a major competitive advantage for us." – Ovais Naqvi, Managing Director – Head of Market Engagement at Abraaj
- "It's likely to save us tens of millions of pounds in lost revenue." – Horia Selegean, Head of Revenue & Margin Assurance at BT
- "Sky are committed to using the latest innovations to ensure our customers receive the best possible viewing experience. RAVN’s AI component has allowed us to successfully automate the EPG review process, which has dramatically reduced the review time as well as ensuring we maintain robust and compliant information." – Angus Gairdner, Head of Content Planning Operations at Sky