11,95 €
incl. VAT
Available immediately as a download
  • Format: PDF
  • Devices: PC
  • With copy protection
  • eBook help
  • Size: 8.52 MB
  • FamilySharing(5)
Product description
Integrating data from multiple sources is essential in the age of big data, but it can be a challenging and time-consuming task. This handy cookbook provides dozens of ready-to-use recipes for using Apache Sqoop, the command-line interface application that optimizes data transfers between relational databases and Hadoop. Sqoop is both powerful and bewildering, but with this cookbook's problem-solution-discussion format, you'll quickly learn how to deploy and then apply Sqoop in your environment. The authors provide MySQL, Oracle, and PostgreSQL database examples on GitHub that you can easily adapt for SQL Server, Netezza, Teradata, or other relational systems.

  • Transfer data from a single database table into your Hadoop ecosystem
  • Keep table data and Hadoop in sync by importing data incrementally
  • Import data from more than one database table
  • Customize transferred data by calling various database functions
  • Export generated, processed, or backed-up data from Hadoop to your database
  • Run Sqoop within Oozie, Hadoop's specialized workflow scheduler
  • Load data into Hadoop's data warehouse (Hive) or database (HBase)
  • Handle installation, connection, and syntax issues common to specific database vendors
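To give a feel for the kind of transfer these recipes cover, here is a minimal sketch of a single-table import driven from Java through Sqoop 1's programmatic entry point (org.apache.sqoop.Sqoop.runTool). The JDBC URL, credentials, table name, and target directory are placeholder assumptions for illustration, not examples taken from the book.

    import org.apache.sqoop.Sqoop;

    public class SqoopImportSketch {
        public static void main(String[] args) {
            // Placeholder connection details -- substitute your own database,
            // credentials, source table, and HDFS target directory.
            String[] importArgs = {
                "import",
                "--connect", "jdbc:mysql://localhost/sqoop_demo",
                "--username", "sqoop_user",
                "--password", "sqoop_pass",
                "--table", "cities",
                "--target-dir", "/user/hadoop/cities",
                "--num-mappers", "1"
            };

            // runTool parses the arguments the same way the sqoop command line does
            // and returns the tool's exit code (0 on success).
            int exitCode = Sqoop.runTool(importArgs);
            System.exit(exitCode);
        }
    }

The argument array, starting with the tool name (import, export, and so on), maps one-to-one onto the sqoop shell command, so moving between the CLI usage described above and a programmatic call is mostly a matter of packaging.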

For legal reasons, this download can only be supplied to customers with a billing address in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.

About the authors
Kathleen Ting is currently a Customer Operations Engineering Manager at Cloudera, where she helps customers deploy and use the Hadoop ecosystem in production. She has spoken on Hadoop, ZooKeeper, and Sqoop at many big data conferences, including Hadoop World, ApacheCon, and OSCON. She has contributed to several projects in the open source community and is a Committer and PMC Member on Sqoop.

Jarek Jarcec Cecho is currently a Software Engineer at Cloudera, where he develops software to help customers better access and integrate with the Hadoop ecosystem. He has led the Sqoop community in the architecture of the next generation of Sqoop, known as Sqoop 2. He has contributed to several projects in the open source community and is a Committer and PMC Member on Sqoop, Flume, and MRUnit.