Tuesday, August 18, 2015

Run a simple Java app on Spark and HDFS

1. Started Hadoop.
2. Created a Maven project in IntelliJ.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>edu.berkeley</groupId>
    <artifactId>simple-project</artifactId>
    <name>Simple Project</name>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency> <!-- Spark dependency -->
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>1.3.1</version>
        </dependency>
    </dependencies>

</project>

-----------------------------------

/* SimpleJava.java */
import org.apache.spark.api.java.*;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function;

public class SimpleJava {
    public static void main(String[] args) {
        String logFile = "/user/XXXXXXXX/input/a.log"; // should be a file that exists in HDFS
        SparkConf conf = new SparkConf().setAppName("Simple Application");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> logData = sc.textFile(logFile).cache();

        long numAs = logData.filter(new Function<String, Boolean>() {
            public Boolean call(String s) { return s.contains("a"); }
        }).count();

        long numBs = logData.filter(new Function<String, Boolean>() {
            public Boolean call(String s) { return s.contains("b"); }
        }).count();

        System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);

        long numByField = logData.filter(new Function<String, Boolean>() {
            public Boolean call(String s) {
                String[] token = s.split(";");
                boolean existed = false;
                for (int i = 0; i < token.length; i++) {
                    if (i == 7) {
                        String timeInHdfs = token[i]; // e.g. 2015-06-30 14:00:29.0
                        System.out.println(timeInHdfs);
                        if (!timeInHdfs.equalsIgnoreCase("null") && timeInHdfs.compareTo("2015-06-29 23:59:59") > 0) {
                            existed = true;
                        }
                    }
                }
                return existed;
            }
        }).count();

        System.out.println("-----------------------------------------------------------------------");
        System.out.println("Lines with bigger time: numByField: " + numByField);
    }
}

-----------------------------------
Note: when creating the JAR artifact in IntelliJ, choose to link dependencies via META-INF/MANIFEST rather than extracting them into the JAR.
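
Before running, the input file referenced in the code has to exist in HDFS. A minimal sketch, keeping the placeholder path from the code above:
$ hdfs dfs -mkdir -p /user/XXXXXXXX/input
$ hdfs dfs -put a.log /user/XXXXXXXX/input/a.log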

Run:
$ ./bin/spark-submit --class "SimpleJava" --master local[4] ~/work/dev/bigdata/SimpleJava/out/artifacts/SimpleJava_jar/SimpleJava.jar
Check in the web UI (while the job runs, the application UI is at http://localhost:4040; to also bring up the standalone master UI on port 8080):
$ ./sbin/start-all.sh
visit http://localhost:8080
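
Alternatively, the jar can be built with Maven instead of an IntelliJ artifact. A sketch; the jar name follows from the artifactId/version in the pom above, and the project path is a placeholder:
$ mvn package
$ ./bin/spark-submit --class "SimpleJava" --master local[4] /path/to/simple-project/target/simple-project-1.0-SNAPSHOT.jar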

Friday, August 14, 2015

Setup Hue on Mac

reference: Hue Installation


package: hue-3.8.1

Too many configuration steps to cover here... (TBD)

start:
build/env/bin/supervisor
and visit:
http://127.0.0.1:8888
log in: root / 123456

Setup HBase and ZooKeeper on Mac

Reference: ZooKeeper Installation, HBase Installation

Based on the Hadoop version from the previous article, use:
* zookeeper-3.4.6
* hbase-1.1.1

Install ZooKeeper first and start it (see the zoo.cfg note below).
>bin/zkServer.sh start
A client can test the connection to it:
>bin/zkCli.sh -server 127.0.0.1:2181
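
Note: zkServer.sh reads conf/zoo.cfg, so copy conf/zoo_sample.cfg to conf/zoo.cfg before starting. A minimal single-node config (the dataDir is an assumption, adjust as needed):
tickTime=2000
dataDir=/usr/local/zookeeper/data
clientPort=2181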

Then install HBase.

[note] in the HBase config, change hdfs://localhost:8020 to hdfs://localhost:9000 to match the NameNode port used in the earlier Hadoop setup.
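
A minimal conf/hbase-site.xml sketch for this setup (assuming HBase stores data in the pseudo-distributed HDFS and uses the external ZooKeeper started above; the values are assumptions):
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost</value>
    </property>
</configuration>
With an external ZooKeeper, also set HBASE_MANAGES_ZK=false in conf/hbase-env.sh so HBase does not start its own.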

Thursday, August 13, 2015

Use Sqoop to import MySQL into Hadoop on Mac

reference guide.

1. Prerequisites:
1.1 Hadoop and Sqoop versions.
$ hadoop version
Hadoop 2.5.2

Sqoop:  sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz

Download mysql-connector-java-5.1.36.jar and put it in the sqoop/lib folder: http://mvnrepository.com/artifact/mysql/mysql-connector-java/5.1.36

1.2 Edit hadoop/etc/hadoop/yarn-site.xml
<configuration>
  <property>
       <name>yarn.nodemanager.aux-services</name>
       <value>mapreduce_shuffle</value>
    </property>

    <property>
       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

    <property>
       <name>yarn.application.classpath</name>
       <value>
           $HADOOP_HOME/etc/hadoop,
           $HADOOP_HOME/share/hadoop/common/*,
           $HADOOP_HOME/share/hadoop/common/lib/*,
           $HADOOP_HOME/share/hadoop/hdfs/*,
           $HADOOP_HOME/share/hadoop/hdfs/lib/*,
           $HADOOP_HOME/share/hadoop/mapreduce/*,
           $HADOOP_HOME/share/hadoop/mapreduce/lib/*,
           $HADOOP_HOME/share/hadoop/yarn/*,
           $HADOOP_HOME/share/hadoop/yarn/lib/*
       </value>
    </property>

</configuration>
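
Note: the classpath above only matters if the import actually runs on YARN. A mapred-site.xml sketch, in case the framework name is not already set (otherwise Hadoop falls back to the local runner):
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>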

1.3 Link missing executables (mysqldump is needed by Sqoop's --direct mode):
sudo ln -s /usr/bin/java /bin/java
sudo ln -s /usr/local/bin/mysqldump /usr/bin/mysqldump

1.4 Run the import command from the Sqoop folder:
./bin/sqoop import --connect jdbc:mysql://127.0.0.1:3306/sample --username root --table employers --direct -m 1 --target-dir /user/YOUR_NAME/employers

1.4.1 Or use an options file:
./bin/sqoop --options-file /usr/local/sqoop/import.txt --table employers -m 1 --target-dir /user/YOUR_NAME/sample/employers --check-column id --incremental append 
import.txt:
----------------
import
--connect
jdbc:mysql://127.0.0.1/db
--username
root

1.4.2 Add --fields-terminated-by '|' to change the field delimiter.
See "Output line formatting arguments" in the Sqoop docs for more formatting options.

1.5 Incremental import based on a column value or modification date.

./bin/sqoop import --connect jdbc:mysql://127.0.0.1:3306/sample --username root --table employers --direct -m 1 --target-dir /user/YOUR_NAME/employers --check-column id --incremental append --last-value 118996
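
To verify the import, list and inspect the generated files in HDFS. A sketch; the paths follow the --target-dir used above, and part-m-00000 is the single map output produced by -m 1:
$ hdfs dfs -ls /user/YOUR_NAME/employers
$ hdfs dfs -cat /user/YOUR_NAME/employers/part-m-00000 | head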

Monday, August 3, 2015

Loading MySQL data into Spark

reference: http://www.sparkexpert.com/2015/03/28/loading-database-data-into-spark-using-data-sources-api/

1. Download spark-1.3.1-bin-hadoop2.6.
2. Download source code: https://github.com/sujee81/SparkApps
3. Import "spark-load-from-db" project.
4. Modified pom.xml.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.sparkexpert</groupId>
    <artifactId>spark-load-from-db</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <spark.version>1.3.1</spark.version>
        <mysql.version>5.1.25</mysql.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>${mysql.version}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.2</version>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                    <compilerArgument>-Xlint:all</compilerArgument>
                    <showWarnings>true</showWarnings>
                    <showDeprecation>true</showDeprecation>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

5. Build and Run.
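
For reference, the core of the sample project loads a MySQL table through the Spark SQL data sources API, roughly like this (a sketch against the Spark 1.3 API, not the project's exact code; database name, table, and credentials are placeholders):

/* LoadFromDb.java -- sketch */
import java.util.HashMap;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class LoadFromDb {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("spark-load-from-db").setMaster("local[4]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sc);

        // options for the built-in "jdbc" data source (Spark 1.3 style)
        Map<String, String> options = new HashMap<String, String>();
        options.put("driver", "com.mysql.jdbc.Driver");
        options.put("url", "jdbc:mysql://localhost:3306/sample?user=root&password=PASSWORD");
        options.put("dbtable", "employers");

        // load the table as a DataFrame and print a few rows
        DataFrame df = sqlContext.load("jdbc", options);
        df.show();

        sc.stop();
    }
}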

Setup Spark with Hadoop on Mac

1. Go to the download page http://spark.apache.org/downloads.html and select spark-1.3.1-bin-hadoop2.6.tgz.
2. After downloading, unpack the tarball.
3. Link it to a fixed path. >ln -s DOWNLOAD/spark-1.3.1-bin-hadoop2.6 /usr/local/spark
4. Start Hadoop. > start-dfs.sh
5. Start Spark. > ./bin/spark-shell
6. Visit the Spark web UI: http://localhost:4040/jobs/
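
To confirm Spark can read from HDFS, a quick check in spark-shell (the path is a placeholder and the 9000 NameNode port is an assumption matching the earlier articles):
scala> sc.textFile("hdfs://localhost:9000/user/YOUR_NAME/input/a.log").count()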