Apache Sqoop Cookbook: Unlocking Hadoop for Your Relational Database / Kathleen Ting, Jarek Jarcec Cecho

Publisher: O'Reilly Media
ISBN-10: 1449364624
ISBN-13: 9781449364625


Q&A with Kathleen Ting and Jarek Jarcec Cecho, authors of "Apache Sqoop Cookbook"

Q. What makes this book important right now?

A. Hadoop has quickly become the standard for processing and analyzing Big Data. In order to integrate a new Hadoop deployment into your existing environment, you will need to transfer data stored in relational databases into Hadoop. Sqoop optimizes data transfers between Hadoop and databases through a command-line interface offering some 60 parameters. In this book, we focus on applying those parameters to common use cases, helping you deploy and use Sqoop in your environment.
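For instance, the simplest form of the import tool needs only connection details and a table name. A minimal sketch (the host, database, credentials, and table below are illustrative, not from the book):

sqoop import \
  --connect jdbc:mysql://mysql.example.com/sqoop \
  --username sqoop \
  --password sqoop \
  --table cities

This copies the entire cities table into HDFS as delimited text files; the recipes in the book build from this basic form toward incremental imports, free-form queries, and exports back to the database.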

Q. What do you hope that readers of your book will walk away with?

A. One recipe at a time, this book guides you from basic commands, requiring no prior Sqoop knowledge, all the way to very advanced use cases. The recipes are detailed enough not only for you to deploy them within your environment but also to understand Sqoop's inner workings.

Q. Can you give us a little taste of the contents?

A. Imagine a scenario where you are incrementally importing records from MySQL into Hadoop. When you resume the import, you notice that some previously imported records have been modified, and you want to pick up those updates as well. How do you drop the older copies of updated records and merge in the newer ones?

This sounds like a use case for the lastmodified incremental mode. Internally, a lastmodified import consists of two standalone MapReduce jobs. The first job imports the delta of changed data, much as a normal import does, saving the data to a temporary directory on HDFS. The second job takes both the old and new data and merges them into the final output, preserving only the last updated value for each row.

Here's an example:

sqoop import \
  --connect jdbc:mysql://mysql.example.com/sqoop \
  --username sqoop \
  --password sqoop \
  --table visits \
  --incremental lastmodified \
  --check-column last_update_date \
  --last-value "2013-05-22 01:01:01"
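The merge step can also be run on its own with Sqoop's merge tool, which is handy when you want to combine an existing data set with a freshly imported delta yourself. A sketch, assuming the HDFS paths are illustrative, visits.jar and the visits class are the ones Sqoop generated during the original import, and id is a column that uniquely identifies each row:

sqoop merge \
  --new-data visits_delta \
  --onto visits \
  --target-dir visits_merged \
  --jar-file visits.jar \
  --class-name visits \
  --merge-key id

Whenever the same merge key appears in both data sets, the row from --new-data wins over the row in --onto, which is exactly how the lastmodified import keeps only the newest copy of each record.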

