Cloud computing

[Original] Differences Between the SAE Local Environment and the Production Environment

I noticed differences between the local environment and the production environment from the very first time I used SAE, and as a result I have barely used the local environment at all.

1. The local environment ships a 32-bit build of PHP, while the production environment runs a 64-bit build. This was the first difference I found, and it alone forced me to do nearly all of my debugging by uploading the code to the server and testing online, because the application I was building needed 64-bit integers, which the local environment does not support.

2. SaeMysql behaves differently. I tried SaeMysql in the local environment today, expecting code that already worked online to run locally as-is, but it did not. In the online version, all you need to do is directly

Hadoop HBase Upgrade

Notes on Upgrading Hadoop HDFS and HBase

We had been running hadoop-1.0.2 with hbase-0.92.1, but an incident caused a loss of metadata, and the class that repairs metadata has a bug of its own.
That left only two paths forward:
1. Patch the HBase source, recompile HBase, and fix the bug ourselves.
2. Upgrade to a later version in which the bug is already fixed; the release notes show that 0.92.2 and all later versions contain the fix.
  We therefore decided to upgrade to the latest stable release, hbase-0.94.3. Since this version of HBase is most compatible with hadoop-1.0.4, Hadoop was upgraded to hadoop-1.0.4 along with it.

1. Hadoop upgrade steps:
 (1) Stop all MR jobs on the cluster, including HBase (if HBase is running, stop it first, then ZooKeeper).
 (2) Stop DFS (steps 1 and 2 can also be done by running the stop-all.sh script once HBase and ZooKeeper have been shut down).
 (3) Delete the temporary data, i.e. the files under the directory configured as the value of hadoop.tmp.dir in core-site.xml.
 (4) Back up the HDFS metadata.
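The shutdown-and-backup steps above can be sketched as a short script. All paths here ($HADOOP_HOME, $HBASE_HOME, the hadoop.tmp.dir value, and the backup target) are placeholders for this cluster's actual locations, not values taken from the post:

```shell
# (1) stop HBase first, then ZooKeeper, so nothing is writing to HDFS
$HBASE_HOME/bin/stop-hbase.sh
$HBASE_HOME/bin/hbase-daemons.sh stop zookeeper   # if HBase manages ZooKeeper

# (2) stop MapReduce and DFS (stop-all.sh covers both in hadoop-1.x)
$HADOOP_HOME/bin/stop-all.sh

# (3) clear the temporary data under hadoop.tmp.dir from core-site.xml
rm -rf /data/hadoop-tmp/*          # example hadoop.tmp.dir value

# (4) back up the NameNode metadata directory before upgrading
tar czf /backup/namenode-meta-$(date +%Y%m%d).tar.gz /data/dfs/name
```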

Transferring Multiple Files over the Network with Multi-threaded Java Sockets

     I needed to look into transferring files with Java sockets, and since multiple files had to be transferred, I went with a multi-threaded design. On the client, each thread creates one socket connection, and each socket connection is responsible for transferring one file. On the server, the ServerSocket accepts one socket connection at a time and creates a thread to receive the file sent by the client.

1. The server
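A minimal sketch of the design described above, with the server and client in one class for brevity. The wire protocol here (file name via writeUTF, then the byte length, then the raw bytes) is my own assumption; the original post does not specify one.

```java
import java.io.*;
import java.net.*;
import java.nio.file.*;
import java.util.*;

public class FileTransfer {

    // Server: accept one socket per expected file, and hand each
    // accepted socket to its own receiver thread.
    public static Thread startServer(ServerSocket server, Path outDir, int expectedFiles) {
        Thread acceptor = new Thread(() -> {
            List<Thread> handlers = new ArrayList<>();
            try {
                for (int i = 0; i < expectedFiles; i++) {
                    Socket sock = server.accept();
                    Thread h = new Thread(() -> receive(sock, outDir));
                    h.start();
                    handlers.add(h);
                }
                for (Thread h : handlers) h.join();   // wait for all receivers
            } catch (IOException | InterruptedException e) {
                e.printStackTrace();
            }
        });
        acceptor.start();
        return acceptor;
    }

    // Receive exactly one file from one connection.
    private static void receive(Socket sock, Path outDir) {
        try (Socket s = sock;
             DataInputStream in = new DataInputStream(s.getInputStream())) {
            String name = in.readUTF();          // file name
            long size = in.readLong();           // payload size in bytes
            try (OutputStream out = Files.newOutputStream(outDir.resolve(name))) {
                byte[] buf = new byte[8192];
                long remaining = size;
                while (remaining > 0) {
                    int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
                    if (n < 0) throw new EOFException("connection closed early");
                    out.write(buf, 0, n);
                    remaining -= n;
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Client: one thread and one socket per file to send.
    public static Thread sendAsync(String host, int port, Path file) {
        Thread sender = new Thread(() -> {
            try (Socket s = new Socket(host, port);
                 DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
                out.writeUTF(file.getFileName().toString());
                out.writeLong(Files.size(file));
                Files.copy(file, out);           // stream the file contents
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        sender.start();
        return sender;
    }
}
```

Joining the acceptor thread after all receiver threads are joined inside it gives the caller a simple way to know every file has been fully written.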

Resin.io releases experimental support for the Intel® Edison

Managing a fleet of IoT devices and deploying code is no easy task. Resin.io changes the workflow by leveraging Git and Docker technology!

How It Works

When you have new code for your end devices, all you need to do is perform a "git push". Resin.io builds your code into a Docker container and deploys it onto the device if and when it is online. Below is an image describing the process, found on Resin.io's website:

Restudy SchemaRDD in SparkSQL

At the very beginning, SchemaRDD was designed simply as an attempt to make life easier for developers in their daily routines of code debugging and unit testing on the SparkSQL core module. The idea boils down to describing the data inside an RDD with a formal description similar to a relational database schema. On top of all the basic functions provided by the common RDD APIs, SchemaRDD also provides straightforward relational query interface functions that are realized through SparkSQL. After several releases and updates, SchemaRDD successfully drew attention among developers in the Spark community, and it has now been officially renamed to the "DataFrame" API on Spark's latest trunk. This article starts with the background of SchemaRDD, then analyzes its design principles and application characteristics. Finally, it gives a brief review of SchemaRDD's history and discusses its application prospects in Spark's future development.