A tutorial for people who want to get started with Hadoop but have a hard time writing Java.
Since Hadoop is written in Java, Mappers and Reducers are normally written in Java as well, but Hadoop provides a feature called Hadoop Streaming that exchanges data over Unix standard input/output, so I used it to write the Mapper and Reducer in Python. Of course, with Hadoop Streaming you can also write them in languages other than Python.
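Conceptually, a Streaming job is just a Unix pipeline. Roughly speaking it behaves like the sketch below, where sort stands in for Hadoop's shuffle phase, mapper.py / reducer.py are the scripts written later in this post, and the file names are placeholders.
$ cat input.txt | ./mapper.py | sort | ./reducer.py > output.txt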
This time, I built a pseudo-distributed environment on Ubuntu.
Ubuntu 12.04 + Hadoop 2.4.1
Install Java if you don't have it
$ sudo apt-get update
$ sudo apt-get install openjdk-7-jdk
Download Hadoop
$ wget http://mirror.nexcess.net/apache/hadoop/common/hadoop-2.4.1/hadoop-2.4.1.tar.gz
$ tar zxvf hadoop-2.4.1.tar.gz
$ mv hadoop-2.4.1 hadoop
$ rm hadoop-2.4.1.tar.gz
$ sudo mv hadoop /usr/local
$ cd /usr/local/hadoop
$ export PATH=$PATH:/usr/local/hadoop/bin  # better to put this line in your .zshrc / .bashrc
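If the PATH line took effect, the hadoop command should now resolve to the new installation:
$ which hadoop
/usr/local/hadoop/bin/hadoop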
Edit the following 4 files
$ vim etc/hadoop/core-site.xml
core-site.xml
...
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
$ vim etc/hadoop/hdfs-site.xml
hdfs-site.xml
...
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
$ mv etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
$ vim etc/hadoop/mapred-site.xml
mapred-site.xml
...
<configuration>
  <property>
    <!-- Only the HDFS daemons are started in this tutorial, so jobs run with the local job runner -->
    <name>mapreduce.framework.name</name>
    <value>local</value>
  </property>
</configuration>
$ vim etc/hadoop/hadoop-env.sh
hadoop-env.sh
...
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
...
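The JAVA_HOME above assumes the default OpenJDK 7 path on 64-bit Ubuntu; ls /usr/lib/jvm shows what is actually installed. Once hadoop-env.sh is saved, a quick sanity check is:
$ ls /usr/lib/jvm
$ hadoop version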
If you don't have an SSH key yet, generate one and authorize it (the start scripts log in to localhost via ssh)
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
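Confirm that passwordless login works (and accept the host key once) before starting the daemons:
$ ssh localhost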
Finally, format the NameNode and start HDFS
$ hdfs namenode -format
$ sbin/start-dfs.sh
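If everything came up, jps (included in the JDK) should list NameNode, DataNode, and SecondaryNameNode processes:
$ jps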
Now write WordCount, the classic Hadoop sample, in Python.
First, prepare the input file
$ mkdir inputs
$ echo "a b b c c c" > inputs/input.txt
Mapper
$ vim mapper.py
mapper.py
#!/usr/bin/env python
import sys

# Emit "<word>\t1" for every word read from standard input
for line in sys.stdin:
    for word in line.strip().split():
        print '{0}\t1'.format(word)
Mapper outputs something like the following
a 1
b 1
b 1
c 1
c 1
c 1
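Since the mapper only reads standard input, you can check this locally without Hadoop:
$ cat inputs/input.txt | python mapper.py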
Reducer
$ vim reducer.py
reducer.py
#!/usr/bin/env python
from collections import defaultdict
from operator import itemgetter
import sys

# Sum the counts emitted by the mapper, keyed by word
wordcount_dict = defaultdict(int)
for line in sys.stdin:
    word, count = line.strip().split('\t')
    wordcount_dict[word] += int(count)

# Print the totals sorted by word
for word, count in sorted(wordcount_dict.items(), key=itemgetter(0)):
    print '{0}\t{1}'.format(word, count)
The Reducer sums the counts for each word emitted by the Mapper and outputs something like the following
a 1
b 2
c 3
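Because both scripts only talk over standard input/output, the whole job can be sanity-checked with a plain Unix pipe, with sort standing in for the shuffle/sort Hadoop performs between the map and reduce phases. Making the scripts executable also means they can be run directly as ./mapper.py and ./reducer.py.
$ chmod +x mapper.py reducer.py
$ cat inputs/input.txt | ./mapper.py | sort | ./reducer.py
This should print the same three totals shown above.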
Finally run the above Mapper / Reducer on Hadoop
First, download the jar file for Hadoop Streaming
$ wget http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-streaming/2.4.1/hadoop-streaming-2.4.1.jar
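(The same streaming jar normally ships inside the Hadoop distribution itself; if so, you could point hadoop jar at it instead of downloading. The exact path below is an assumption about the 2.4.1 layout.)
$ ls /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.4.1.jar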
Create a directory on HDFS and put the input file in it (be careful not to confuse local files with files on HDFS)
$ hdfs dfs -mkdir /user
$ hdfs dfs -mkdir /user/vagrant
$ hdfs dfs -put inputs/input.txt /user/vagrant
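To confirm that the file actually landed on HDFS (optional):
$ hdfs dfs -ls /user/vagrant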
When the job is executed, the result is stored in the output directory given with -output (a relative path like outputs resolves to /user/vagrant/outputs on HDFS). The -file options ship the local mapper.py and reducer.py along with the job.
$ hadoop jar hadoop-streaming-2.4.1.jar -file mapper.py -mapper mapper.py -file reducer.py -reducer reducer.py -input /user/vagrant/input.txt -output outputs
$ hdfs dfs -cat /user/vagrant/outputs/part-00000
a 1
b 2
c 3
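Note that Hadoop refuses to overwrite an existing output directory, so remove it on HDFS before running the job again:
$ hdfs dfs -rm -r /user/vagrant/outputs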