Monday, September 28, 2015

Install the intl extension for PHP on Mac

1. Dependencies.
Install ICU4C:
tar xzvf icu4c-4_4_2-src.tgz
cd icu/source
chmod +x runConfigureICU configure install-sh
./runConfigureICU MacOSX
make && make install

Install Autoconf:
brew install autoconf

Install PECL.

2. Install intl.
pecl install intl

3. Update php.ini:
extension=/usr/lib/php/extensions/no-debug-non-zts-20121212/intl.so
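To confirm the extension is actually loaded (assuming the CLI reads the same php.ini), a quick check:

php -m | grep intl
php -r 'var_dump(extension_loaded("intl"));'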




Tuesday, September 22, 2015

Canada Tourist Visa

http://www.16safety.ca/page/%E5%8A%A0%E6%8B%BF%E5%A4%A7%E7%AD%BE%E8%AF%81%E7%BD%91%E4%B8%8A%E7%94%B3%E8%AF%B7%E8%BF%87%E7%A8%8B%E4%BB%8B%E7%BB%8D%E5%8F%8A%E6%B3%A8%E6%84%8F%E4%BA%8B%E9%A1%B9-%EF%BC%88%E5%B7%B2%E6%9B%B4%E6%96%B0%EF%BC%89

Required materials:
1. Form IMM5257. (Fill it out and upload it directly.)
2. Marriage certificate. (Translate it first, then scan both the original and the translation.)
3. Family information form. (Fill it out, print and sign it, then scan it.)
4. Travel information. (Flight tickets, itinerary in Canada, etc.)
5. Purpose of travel. (A letter of guarantee from the applicant to the embassy, a wedding invitation, etc.)
6. Passport.
7. Education and employment information form. (Print it, fill it out by hand, sign it, then scan it.)
8. Invitation letter. (It should mention cost coverage.)
9. Form IMM5713, the representative form? (Not sure whether this is needed.)
10. Proof of income. (In Chinese and English, signed, stamped, then scanned.)
11. Applicant's financial status. (Property ownership certificate, deposit certificate, six months of bank statements.)
12. Digital photo.
13. Schedule 1 (the supplementary form to IMM5257).
14. Leave approval letter from the applicant's employer. (In Chinese and English.)

Friday, September 11, 2015

Performance test: single-node Spark + HDFS + Sqoop vs. MySQL

Mac OS X 10.9.4, 2.6 GHz Intel Core i5, 16 GB 1600 MHz DDR3

* Using group + count in both MySQL and Spark:
select id, count(1) as cnt from contact_alerts group by id;

rows    size (MB)   create (PHP)   import (Sqoop)   Spark (s)   MySQL (s)
100k    13          15s            NA               6           0.003
1m      104         222s           26s              15          5
10m     1016        31m            1m28s            ?           39
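For reference, the Spark side of this comparison can be written either as SQL against a registered temp table or through the DataFrame API. A minimal sketch, assuming an alertDF DataFrame registered as "alerts" the way the next post builds it:

        // Same group + count as the MySQL query, run through Spark SQL.
        DataFrame counts = sqlContext.sql("SELECT id, COUNT(1) AS cnt FROM alerts GROUP BY id");
        counts.show();

        // Equivalent with the DataFrame API (Spark 1.3+), no SQL string needed.
        alertDF.groupBy("id").count().show();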

Transform text files on HDFS to DataFrames in Spark, and join multiple DataFrames

reference

Without Hive, Spark can read multiple text files from HDFS and transform them into DataFrames, which makes them convenient to analyze.

pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>edu.berkeley</groupId>
    <artifactId>simple-project</artifactId>
    <name>Simple Project</name>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency> <!-- Spark dependency -->
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>1.3.1</version>
        </dependency>

        <dependency> <!-- Spark dependency -->
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.10</artifactId>
            <version>1.4.1</version>
        </dependency>
    </dependencies>

</project>

----------------------------------------------
Alert.java
import scala.Serializable;

public class Alert implements Serializable {
    private String id;
    private String alert;
    private String created;


    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getAlert() {
        return alert;
    }

    public void setAlert(String alert) {
        this.alert = alert;
    }

    public String getCreated() {
        return created;
    }

    public void setCreated(String created) {
        this.created = created;
    }
}
--------------------------------------------------
AlertMore.java
import scala.Serializable;

public class AlertMore implements Serializable {
    private String id;
    private String contactId;

    public String getContactId() {
        return contactId;
    }

    public void setContactId(String contactId) {
        this.contactId = contactId;
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }
}
----------------------------------------------------
SimpleJava.java
/* SimpleJava.java */
import org.apache.spark.api.java.*;
import org.apache.spark.SparkConf;
import org.apache.spark.sql.*;
import org.apache.spark.api.java.function.Function;


public class SimpleJava {
    public static void main(String[] args) {
        String logFile = "/user/XXX/sample/contact_alerts"; // Should be some file on your system
        SparkConf conf = new SparkConf().setAppName("Simple Application");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaRDD<String> logData = sc.textFile(logFile).cache();

        SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);

        // Parse each CSV line into an Alert bean: column 0 = id, 3 = alert, 7 = created.
        JavaRDD<Alert> alerts = logData.map(new Function<String, Alert>() {
            public Alert call(String line) throws Exception {
                Alert alert = new Alert();
                alert.setId(null);
                alert.setAlert(null);
                alert.setCreated(null);

                String[] tokens = line.split(",");
                for (int i = 0; i < tokens.length; i++) {
                    if (i == 0) alert.setId(tokens[i]);
                    if (i == 3) alert.setAlert(tokens[i]);
                    if (i == 7) alert.setCreated(tokens[i]);
                }

                return alert;
            }
        });
        DataFrame alertDF = sqlContext.createDataFrame(alerts, Alert.class);
        alertDF.registerTempTable("alerts");

        // Parse the same lines into an AlertMore bean: column 0 = id, 1 = contactId.
        JavaRDD<AlertMore> alertsMore = logData.map(new Function<String, AlertMore>() {
            public AlertMore call(String line) throws Exception {
                AlertMore alertMore = new AlertMore();
                alertMore.setId(null);
                alertMore.setContactId(null);

                String[] tokens = line.split(",");
                for (int i = 0; i < tokens.length; i++) {
                    if (i == 0) alertMore.setId(tokens[i]);
                    if (i == 1) alertMore.setContactId(tokens[i]);
                }

                return alertMore;
            }
        });
        DataFrame alertMoreDF = sqlContext.createDataFrame(alertsMore, AlertMore.class);
        alertMoreDF.registerTempTable("alerts_more");

        System.out.println("-----------------------------------------------------------------------");
        System.out.println("DataFrame - query from alerts");
        // Join the two DataFrames on id.
        DataFrame totalAlerts = sqlContext.sql("SELECT * FROM alerts")
                .join(alertMoreDF, alertDF.col("id").equalTo(alertMoreDF.col("id")));
        totalAlerts.show();
        System.out.println(alertDF.filter(alertDF.col("id").gt(911111)).count());

        /* DataFrame from JSON:
        DataFrame dfFromJson = sqlContext.jsonFile("/user/XXXXX/people.json");
        dfFromJson.show();
        dfFromJson.select("name").show();
        dfFromJson.select(dfFromJson.col("name"), dfFromJson.col("age").plus(1)).show();
        dfFromJson.filter(dfFromJson.col("age").gt(21)).show();
        dfFromJson.groupBy("age").count().show();
        */
    }
}


Run:
$ ./bin/spark-submit --class "SimpleJava" --master local[4] ~/work/dev/bigdata/SimpleJava/out/artifacts/SimpleJava_jar/SimpleJava.jar
If you hit java.lang.OutOfMemoryError: GC overhead limit exceeded, add -Dspark.executor.memory=6g.
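Since the job runs with --master local[4], the data sits in the driver JVM, so another option (a sketch using the standard spark-submit flag; 6g is just the same figure as above) is to raise the driver memory at submit time:

$ ./bin/spark-submit --class "SimpleJava" --master local[4] --driver-memory 6g ~/work/dev/bigdata/SimpleJava/out/artifacts/SimpleJava_jar/SimpleJava.jar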

Tuesday, August 18, 2015

Run a simple Java app on Spark and HDFS

1. Started Hadoop.
2. Created a Maven project in IntelliJ.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>edu.berkeley</groupId>
    <artifactId>simple-project</artifactId>
    <name>Simple Project</name>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency> <!-- Spark dependency -->
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>1.3.1</version>
        </dependency>
    </dependencies>

</project>

-----------------------------------

/* SimpleJava.java */
import org.apache.spark.api.java.*;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function;

public class SimpleJava {
    public static void main(String[] args) {
        String logFile = "/user/XXXXXXXX/input/a.log"; // Should be some file on your system
        SparkConf conf = new SparkConf().setAppName("Simple Application");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> logData = sc.textFile(logFile).cache();

        long numAs = logData.filter(new Function<String, Boolean>() {
            public Boolean call(String s) { return s.contains("a"); }
        }).count();

        long numBs = logData.filter(new Function<String, Boolean>() {
            public Boolean call(String s) { return s.contains("b"); }
        }).count();

        System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);

        // Count lines whose 8th field (index 7) is a timestamp later than the cutoff.
        long numByField = logData.filter(new Function<String, Boolean>() {
            public Boolean call(String s) {
                String[] token = s.split(";");
                boolean existed = false;
                for (int i = 0; i < token.length; i++) {
                    if (i == 7) {
                        String timeInHdfs = token[i]; // e.g. 2015-06-30 14:00:29.0
                        System.out.println(timeInHdfs);
                        if (!timeInHdfs.equalsIgnoreCase("null") && timeInHdfs.compareTo("2015-06-29 23:59:59") > 0) {
                            existed = true;
                        }
                    }
                }
                return existed;
            }
        }).count();
        System.out.println("-----------------------------------------------------------------------");
        System.out.println("Lines with bigger time: numByField: " + numByField);
    }
}

-----------------------------------
Note: when creating the artifact, choose to link dependency JARs via the manifest (META-INF) rather than extracting them into the JAR.

Run:
$ ./bin/spark-submit --class "SimpleJava" --master local[4] ~/work/dev/bigdata/SimpleJava/out/artifacts/SimpleJava_jar/SimpleJava.jar
Check in the web UI:
$ ./sbin/start-all.sh
visit http://localhost:8080

Friday, August 14, 2015

Setup Hue on Mac

reference: Hue Installation


package: hue-3.8.1

There is a lot of configuration to work through... (TBD)

Start:
build/env/bin/supervisor
Then visit:
http://127.0.0.1:8888
Log in with root / 123456.

Setup HBase and ZooKeeper on Mac

Reference: ZooKeeper installation,  HBase Installation

Based on the Hadoop version from the previous article, use:
* zookeeper-3.4.6
* hbase-1.1.1

Install ZooKeeper first and start it:
>bin/zkServer.sh start
A client can test the connection with:
>bin/zkCli.sh -server 127.0.0.1:2181

Then install HBase.

[note] In the HBase config, change hdfs://localhost:8020 to hdfs://localhost:9000 to match the NameNode address from the earlier Hadoop setup.
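That address lives in the hbase.rootdir property; a minimal hbase-site.xml sketch under that assumption (the /hbase path and the ZooKeeper quorum value are placeholders to adjust):

<property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
</property>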

Thursday, August 13, 2015

Use Sqoop to import MySQL data into Hadoop on Mac

reference guide.

1. Prerequisites:
1.1 Hadoop and Sqoop versions.
$ hadoop version
Hadoop 2.5.2

Sqoop:  sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz

Download mysql-connector-java-5.1.36.jar and put it in the sqoop/lib folder: http://mvnrepository.com/artifact/mysql/mysql-connector-java/5.1.36

1.2 Edit hadoop/etc/hadoop/yarn-site.xml:
<configuration>
  <property>
       <name>yarn.nodemanager.aux-services</name>
       <value>mapreduce_shuffle</value>
    </property>

    <property>
       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

    <property>
       <name>yarn.application.classpath</name>
       <value>
           $HADOOP_HOME/etc/hadoop,
           $HADOOP_HOME/share/hadoop/common/*,
           $HADOOP_HOME/share/hadoop/common/lib/*,
           $HADOOP_HOME/share/hadoop/hdfs/*,
           $HADOOP_HOME/share/hadoop/hdfs/lib/*,
           $HADOOP_HOME/share/hadoop/mapreduce/*,
           $HADOOP_HOME/share/hadoop/mapreduce/lib/*,
           $HADOOP_HOME/share/hadoop/yarn/*,
           $HADOOP_HOME/share/hadoop/yarn/lib/*
       </value>
    </property>

</configuration>

1.3 Link missing executable binaries:
sudo ln -s /usr/bin/java /bin/java
sudo ln -s /usr/local/bin/mysqldump /usr/bin/mysqldump

1.4 Run the import command from the Sqoop folder:
./bin/sqoop import --connect jdbc:mysql://127.0.0.1:3306/sample --username root --table employers --direct -m 1 --target-dir /user/YOUR_NAME/employers

1.4.1 Or use an options file:
./bin/sqoop --options-file /usr/local/sqoop/import.txt --table employers -m 1 --target-dir /user/YOUR_NAME/sample/employers --check-column id --incremental append 
import.txt:
----------------
import
--connect
jdbc:mysql://127.0.0.1/db
--username
root

1.4.2 Use --fields-terminated-by '|' to control the field delimiter.
See "Output line formatting arguments" in the Sqoop docs for more formatting options.

1.5 Incremental import based on a column value or modification date:

./bin/sqoop import --connect jdbc:mysql://127.0.0.1:3306/sample --username root --table employers --direct -m 1 --target-dir /user/YOUR_NAME/employers --check-column id --incremental append --last-value 118996
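After any of these imports, a quick sanity check is to list the target directory and peek at the generated files (the part-m-* names below follow the usual map-task output naming, shown as an example):

hdfs dfs -ls /user/YOUR_NAME/employers
hdfs dfs -cat /user/YOUR_NAME/employers/part-m-00000 | head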

Monday, August 3, 2015

Loading MySQL data into Spark

reference: http://www.sparkexpert.com/2015/03/28/loading-database-data-into-spark-using-data-sources-api/

1. Download spark-1.3.1-bin-hadoop2.6.
2. Download the source code: https://github.com/sujee81/SparkApps
3. Import the "spark-load-from-db" project.
4. Modified pom.xml.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.sparkexpert</groupId>
    <artifactId>spark-load-from-db</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <spark.version>1.3.1</spark.version>
        <mysql.version>5.1.25</mysql.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>${mysql.version}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.2</version>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                    <compilerArgument>-Xlint:all</compilerArgument>
                    <showWarnings>true</showWarnings>
                    <showDeprecation>true</showDeprecation>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

5. Build and Run.
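For reference, the project's main class boils down to the Spark 1.3 data sources API call sketched below. This is a minimal sketch rather than the repo's exact code: the sample database, employers table and empty root password are placeholders borrowed from the earlier Sqoop post.

import java.util.HashMap;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class LoadFromMysql {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("Load from MySQL");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sc);

        // JDBC options for the "jdbc" data source; adjust database, table and credentials.
        Map<String, String> options = new HashMap<String, String>();
        options.put("driver", "com.mysql.jdbc.Driver");
        options.put("url", "jdbc:mysql://127.0.0.1:3306/sample?user=root&password=");
        options.put("dbtable", "employers");

        // Load the MySQL table as a DataFrame over JDBC (Spark 1.3.x data sources API).
        DataFrame employers = sqlContext.load("jdbc", options);
        employers.show();

        sc.stop();
    }
}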

Setup Spark with Hadoop on Mac

1. Go to the download page http://spark.apache.org/downloads.html and select spark-1.3.1-bin-hadoop2.6.tgz.
2. After downloading, unpack the archive.
3. Link the unpacked folder to a stable path. >ln -s DOWNLOAD/spark-1.3.1-bin-hadoop2.6 /usr/local/spark
4. Start Hadoop. > start-dfs.sh
5. Start Spark. > ./bin/spark-shell
6. Visit the Spark web UI: http://localhost:4040/jobs/

Thursday, July 16, 2015

Create a sample WordCount jar in IntelliJ and run it against HDFS

1. Open IntelliJ and create a new command-line project.
2. In WordCount.java, copy the following code:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context
        ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
        ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
3. Import dependencies.
'Command' + ';' => click 'Modules' => select 'SDK 1.7' => click '+', select 'Libraries' => 'New Library' => 'From Maven' => search "hadoop-core" => select "org.apache.hadoop:hadoop-core:1.2.0".
4. Build a jar.
'Command' + ';' => click "Artifacts" => click '+' => 'JAR' => 'From modules with dependencies...'
5. Run
hdfs dfs -put A-LOCAL-FOLDER input
hadoop jar out/artifacts/wordcount_jar/wordcount.jar input output
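To inspect the result, the reducer output lands in the output directory under the usual part-r-* naming (shown as an example):

hdfs dfs -cat output/part-r-00000 | head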

Setup a Hadoop single-node cluster on Mac

Reference: Apache Hadoop - Setting up a Single Node Cluster

After the setup in the reference above is done, it is better to update the Hadoop config so it keeps working after a reboot.
1. Specify storage directories for the NameNode and DataNode, otherwise they default to /tmp and are lost after a reboot.
$ vi etc/hadoop/hdfs-site.xml
Add the following properties:
    <property>
       <name>dfs.namenode.name.dir</name>
       <value>~/hadoop/namenode</value>
    </property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>~/hadoop/data</value>
    </property>
2. Disable all permission checks.
$ vi etc/hadoop/hdfs-site.xml
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
3. Format the NameNode:
$ hdfs namenode -format
4. Start Hadoop and YARN:
$ start-dfs.sh && start-yarn.sh
5. Visit the web UIs:
http://localhost:50070/
http://localhost:8088/
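A quick way to confirm everything came up (and comes back after a reboot) is to check the running Java daemons:

$ jps
A healthy single-node setup should list NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager (plus Jps itself).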