Python
Toggle comment on selected lines: Control+/
Indent selected lines: Tab; unindent: Shift+Tab
Matlab
TeX Studio
Comment selected lines: Command+T; uncomment: Command+U
If you have run Keras at least once, you will find the Keras configuration file at:
$HOME/.keras/keras.json
If it isn’t there, you can create it.
NOTE for Windows users: replace $HOME with %USERPROFILE%.
The default configuration file looks like this:
{
    "image_dim_ordering": "tf",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}
Simply change the field backend to either “theano” or “tensorflow”, and Keras will use the new configuration next time you run any Keras code.
You can also define the environment variable KERAS_BACKEND, and this will override what is defined in your config file:
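As a minimal sketch, the variable can also be set from Python itself, as long as it happens before the first keras import (the import is left commented out so the sketch does not require Keras to be installed):

```python
import os

# Setting KERAS_BACKEND overrides the "backend" field in keras.json.
# It must be set before keras is imported for the first time.
os.environ["KERAS_BACKEND"] = "theano"

# import keras  # would now print: Using Theano backend.
print(os.environ["KERAS_BACKEND"])  # theano
```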
A virtual member function is declared with `virtual` in a base class and redefined (overridden) in one or more derived classes. The declaration has the form: virtual return_type function_name(parameter_list) { body }; It implements polymorphism: through a base-class pointer or reference to a derived object, a call reaches the derived class's overriding function of the same name.
UDP is a connectionless protocol.
To establish a connection, TCP uses a three-way handshake. Before a client attempts to connect with a server, the server must first bind to and listen at a port to open it up for connections: this is called a passive open. Once the passive open is established, a client may initiate an active open. To establish a connection, the three-way (or 3-step) handshake occurs:
SYN: The active open is performed by the client sending a SYN to the server. The client sets the segment’s sequence number to a random value A.
SYN-ACK: In response, the server replies with a SYN-ACK. The acknowledgment number is set to one more than the received sequence number i.e. A+1, and the sequence number that the server chooses for the packet is another random number, B.
ACK: Finally, the client sends an ACK back to the server. The sequence number is set to the received acknowledgement value i.e. A+1, and the acknowledgement number is set to one more than the received sequence number i.e. B+1.
At this point, both the client and server have received an acknowledgment of the connection. The steps 1, 2 establish the connection parameter (sequence number) for one direction and it is acknowledged. The steps 2, 3 establish the connection parameter (sequence number) for the other direction and it is acknowledged. With these, a full-duplex communication is established.
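The handshake itself is performed by the operating system's TCP stack; a small Python sketch on localhost shows the passive open (bind + listen) and the active open (connect) described above:

```python
import socket
import threading

def serve(server_sock):
    # accept() returns once the three-way handshake has completed.
    conn, _addr = server_sock.accept()
    conn.sendall(b"hello")
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # bind to an ephemeral port
server.listen(1)                # passive open
port = server.getsockname()[1]

t = threading.Thread(target=serve, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # active open: SYN, SYN-ACK, ACK
data = b""
while len(data) < 5:             # read until the full 5-byte message arrives
    chunk = client.recv(5 - len(data))
    if not chunk:
        break
    data += chunk
client.close()
t.join()
server.close()
print(data)  # b'hello'
```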
++i and i++
++i will increment the value of i, and then return the incremented value.
i++ will increment the value of i, but return the original value that i held before being incremented.
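Python has no ++ operator, but the two behaviors can be simulated with a small (hypothetical) Counter class:

```python
# Hypothetical Counter class simulating C-style pre/post increment.
class Counter:
    def __init__(self, value):
        self.value = value

    def pre_increment(self):   # ++i: increment first, then return the new value
        self.value += 1
        return self.value

    def post_increment(self):  # i++: return the old value, then increment
        old = self.value
        self.value += 1
        return old

i = Counter(5)
pre = i.pre_increment()    # 6 (value is now 6)
post = i.post_increment()  # 6 (returns old value; value is now 7)
print(pre, post, i.value)  # 6 6 7
```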
Given a binary tree, find its minimum depth. The minimum depth is the number of nodes along the shortest path from the root node down to the nearest leaf node.
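A Python sketch of one way to solve it (assuming a simple TreeNode class), using breadth-first search so the first leaf reached gives the minimum depth:

```python
from collections import deque

class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def min_depth(root):
    """BFS: the first leaf dequeued is at minimum depth."""
    if root is None:
        return 0
    queue = deque([(root, 1)])
    while queue:
        node, depth = queue.popleft()
        if node.left is None and node.right is None:
            return depth
        if node.left:
            queue.append((node.left, depth + 1))
        if node.right:
            queue.append((node.right, depth + 1))

# Tree:  1        leaf 2 is at depth 2, so the answer is 2
#       / \
#      2   3
#           \
#            4
root = TreeNode(1, TreeNode(2), TreeNode(3, None, TreeNode(4)))
print(min_depth(root))  # 2
```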
Evaluate the value of an arithmetic expression in Reverse Polish Notation.
Valid operators are +, -, *, /. Each operand may be an integer or another expression.
Some examples:
["2", "1", "+", "3", "*"] -> ((2 + 1) * 3) -> 9
["4", "13", "5", "/", "+"] -> (4 + (13 / 5)) -> 6
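A stack-based Python sketch (integer division truncates toward zero here, matching the usual convention for this problem):

```python
def eval_rpn(tokens):
    """Evaluate Reverse Polish Notation with an operand stack."""
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: int(a / b),  # truncate toward zero, as in C
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()      # right operand is on top
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))
    return stack[0]

print(eval_rpn(["2", "1", "+", "3", "*"]))   # 9
print(eval_rpn(["4", "13", "5", "/", "+"]))  # 6
```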
Given n points on a 2D plane, find the maximum number of points that lie on the same straight line.
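One common approach: for each anchor point, bucket every other point by its reduced slope relative to the anchor. A Python sketch (points assumed to be (x, y) tuples):

```python
from math import gcd
from collections import defaultdict

def max_points(points):
    """For each anchor, count points sharing the same reduced slope."""
    if len(points) <= 2:
        return len(points)
    best = 0
    for i, (x1, y1) in enumerate(points):
        slopes = defaultdict(int)
        duplicates = 0
        for x2, y2 in points[i + 1:]:
            dx, dy = x2 - x1, y2 - y1
            if dx == 0 and dy == 0:
                duplicates += 1          # coincident point
                continue
            g = gcd(dx, dy)              # reduce the direction vector
            dx, dy = dx // g, dy // g
            if dx < 0 or (dx == 0 and dy < 0):
                dx, dy = -dx, -dy        # normalize sign
            slopes[(dx, dy)] += 1
        local = max(slopes.values(), default=0)
        best = max(best, local + duplicates + 1)
    return best

print(max_points([(1, 1), (2, 2), (3, 3), (0, 4)]))  # 3
```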
Sort a linked list in O(n log n) time using constant space complexity.
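Merge sort fits the O(n log n) requirement; a Python sketch with a minimal ListNode class (note the recursion still uses O(log n) stack frames, so the space is only approximately constant):

```python
class ListNode:
    def __init__(self, val, next=None):
        self.val, self.next = val, next

def sort_list(head):
    """Merge sort: split with slow/fast pointers, then splice-merge."""
    if head is None or head.next is None:
        return head
    slow, fast = head, head.next
    while fast and fast.next:            # find the middle
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None     # cut the list in two
    left, right = sort_list(head), sort_list(mid)
    dummy = tail = ListNode(0)           # merge the sorted halves
    while left and right:
        if left.val <= right.val:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next

def from_list(vals):
    head = None
    for v in reversed(vals):
        head = ListNode(v, head)
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

print(to_list(sort_list(from_list([4, 2, 1, 3]))))  # [1, 2, 3, 4]
```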
Given an array of integers, find two numbers such that they add up to a specific target number.
The function twoSum should return indices of the two numbers such that they add up to the target, where index1 must be less than index2. Please note that your returned answers (both index1 and index2) are not zero-based.
You may assume that each input would have exactly one solution.
Input: numbers={2, 7, 11, 15}, target=9
Output: index1=1, index2=2
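A one-pass hash-map sketch in Python, returning the required 1-based indices:

```python
def two_sum(numbers, target):
    """Map value -> index; return 1-based indices with index1 < index2."""
    seen = {}
    for i, n in enumerate(numbers):
        if target - n in seen:
            return seen[target - n] + 1, i + 1
        seen[n] = i
    return None  # the problem guarantees exactly one solution

print(two_sum([2, 7, 11, 15], 9))  # (1, 2)
```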
Given a binary tree, determine if it is height-balanced.
For this problem, a height-balanced binary tree is defined as a binary tree in which the depth of the two subtrees of every node never differ by more than 1.
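A bottom-up Python sketch (assuming a simple TreeNode class) that computes each subtree's depth and propagates a -1 sentinel as soon as an imbalance is found:

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_balanced(root):
    """Return each subtree's depth; -1 signals 'already unbalanced'."""
    def depth(node):
        if node is None:
            return 0
        left, right = depth(node.left), depth(node.right)
        if left == -1 or right == -1 or abs(left - right) > 1:
            return -1
        return 1 + max(left, right)
    return depth(root) != -1

balanced = TreeNode(1, TreeNode(2), TreeNode(3))
skewed = TreeNode(1, TreeNode(2, TreeNode(3, TreeNode(4))))
print(is_balanced(balanced))  # True
print(is_balanced(skewed))    # False
```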
Given a binary tree, return the preorder traversal of its nodes’ values.
For example:
Given binary tree {1,#,2,3}, return [1,2,3].
Note: Recursive solution is trivial, could you do it iteratively?
### Recursive
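For the iterative version, a Python sketch using an explicit stack (assuming a simple TreeNode class); the right child is pushed first so the left subtree is visited first:

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def preorder(root):
    """Iterative preorder traversal with an explicit stack."""
    result, stack = [], [root] if root else []
    while stack:
        node = stack.pop()
        result.append(node.val)
        if node.right:            # pushed first, popped last
            stack.append(node.right)
        if node.left:
            stack.append(node.left)
    return result

# The tree {1,#,2,3}: 1 has no left child; its right child 2 has left child 3.
root = TreeNode(1, None, TreeNode(2, TreeNode(3)))
print(preorder(root))  # [1, 2, 3]
```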
Implement atoi to convert a string to an integer.
Hint: Carefully consider all possible input cases. If you want a challenge, please do not see below and ask yourself what are the possible input cases.
Notes: It is intended for this problem to be specified vaguely (i.e., no given input specs). You are responsible for gathering all the input requirements up front.
Requirements for atoi:
The function first discards as many whitespace characters as necessary until the first non-whitespace character is found. Then, starting from this character, takes an optional initial plus or minus sign followed by as many numerical digits as possible, and interprets them as a numerical value.
The string can contain additional characters after those that form the integral number, which are ignored and have no effect on the behavior of this function.
If the first sequence of non-whitespace characters in str is not a valid integral number, or if no such sequence exists because either str is empty or it contains only whitespace characters, no conversion is performed.
If no valid conversion could be performed, a zero value is returned. If the correct value is out of the range of representable values, INT_MAX (2147483647) or INT_MIN (-2147483648) is returned.
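A Python sketch following the requirements above (skip whitespace, optional sign, digits, clamp to the 32-bit range, 0 when no valid conversion exists):

```python
INT_MAX, INT_MIN = 2147483647, -2147483648

def my_atoi(s):
    """Convert a string to a 32-bit integer per the atoi requirements."""
    i, n = 0, len(s)
    while i < n and s[i].isspace():      # discard leading whitespace
        i += 1
    sign = 1
    if i < n and s[i] in "+-":           # optional sign
        sign = -1 if s[i] == "-" else 1
        i += 1
    value = 0
    while i < n and s[i].isdigit():      # as many digits as possible
        value = value * 10 + int(s[i])
        i += 1
    value *= sign                        # trailing characters are ignored
    return max(INT_MIN, min(INT_MAX, value))

print(my_atoi("   -42abc"))    # -42
print(my_atoi("words 987"))    # 0
print(my_atoi("91283472332"))  # 2147483647
```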
Merge two sorted linked lists and return it as a new list. The new list should be made by splicing together the nodes of the first two lists.
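A Python sketch with a minimal ListNode class; a dummy head keeps the splicing simple:

```python
class ListNode:
    def __init__(self, val, next=None):
        self.val, self.next = val, next

def merge_two_lists(l1, l2):
    """Splice nodes from the two sorted input lists behind a dummy head."""
    dummy = tail = ListNode(0)
    while l1 and l2:
        if l1.val <= l2.val:
            tail.next, l1 = l1, l1.next
        else:
            tail.next, l2 = l2, l2.next
        tail = tail.next
    tail.next = l1 or l2       # attach whatever remains
    return dummy.next

def build(vals):
    head = None
    for v in reversed(vals):
        head = ListNode(v, head)
    return head

merged = merge_two_lists(build([1, 3, 5]), build([2, 4]))
out = []
while merged:
    out.append(merged.val)
    merged = merged.next
print(out)  # [1, 2, 3, 4, 5]
```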
Hexo is a fast, simple and powerful blog framework. Posts are written in Markdown, which is very efficient. I migrated the old blog posts to this theme.
Required Java version:
On CentOS we can simply install it with this command:
Set the JAVA_HOME environment variable in .bashrc:
Download Bazel from GitHub and compile:
Add bazel-bin/src/bazel to .bashrc. On my server:
Now we have installed Bazel on CentOS.
If you use Ubuntu, you can install it from the apt repository. I will not go into detail in this post; you can refer to this link:
Clone TensorFlow from GitHub:
Then it will ask you which environment options to set, such as:
When building from source, you will still build a pip package and install that.
If you’re working on TensorFlow itself, it is useful to be able to test your changes in an interactive python shell without having to reinstall TensorFlow.
To set up TensorFlow such that all files are linked (instead of copied) from the system directories, run the following commands inside the TensorFlow root directory:
For example, change the version from 0.10.0 to 0.11.0 in the file tensorflow/tools/pip_package/setup.py:
Then do the following, and we can build our development TensorFlow:
We can see the version changed from 0.10.0rc0 to 0.11.0rc0.
Official link: this.
TensorFlow™ is an open source software library for numerical computation using data flow graphs.
TensorFlow is for everyone. It’s for students, researchers, hobbyists, hackers, engineers, developers, inventors and innovators and is being open sourced under the Apache 2.0 open source license.
On April 13, 2016, the distributed version of TensorFlow was published:
Announcing TensorFlow 0.8 – now with distributed computing support!
This post was also reshared on Google+ by Jeff Dean, a Google Senior Fellow in the Systems Infrastructure Group.
I tried the distributed version on 3 virtual machines on an OpenStack platform, a well-known open source cloud OS.
Now we have finished the preparation for a demo. This demo follows the TensorFlow tutorial:
https://www.tensorflow.org/versions/r0.8/how_tos/distributed/index.html
A TensorFlow “cluster” is a set of “tasks” that participate in the distributed execution of a TensorFlow graph. Each task is associated with a TensorFlow “server”, which contains a “master” that can be used to create sessions, and a “worker” that executes operations in the graph. A cluster can also be divided into one or more “jobs”, where each job contains one or more tasks.
Create a tf.train.ClusterSpec that describes all of the tasks in the cluster. This should be the same for each task.
Create a tf.train.Server, passing the tf.train.ClusterSpec to the constructor, and identifying the local task with a job name and task index.
We define 2 worker nodes and 1 parameter server node as follows:
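The cluster definition shared by ps.py and the worker scripts can be sketched as a plain dictionary (the hostnames master0/worker0/worker1 and port 2222 are from my environment; the tf.train calls are left as comments because they assume TensorFlow 0.8 is installed):

```python
# Cluster layout for this demo: 1 parameter server job and 1 worker job
# with two tasks. Every task builds the same cluster definition.
cluster_def = {
    "ps": ["master0:2222"],
    "worker": ["worker0:2222", "worker1:2222"],
}

# With TensorFlow 0.8 installed, each task would then start its own
# in-process server, e.g. for the parameter server task:
#   import tensorflow as tf
#   cluster = tf.train.ClusterSpec(cluster_def)
#   server = tf.train.Server(cluster, job_name="ps", task_index=0)
#   server.join()

print(sorted(cluster_def))         # ['ps', 'worker']
print(len(cluster_def["worker"]))  # 2
```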
Create ps.py as follows for the parameter server:
It will listen on port 2222 of the master0 server.
On the two worker nodes, do the following steps.
Create worker0.py as follows:
It will listen on port 2222 of the worker0 server.
Create worker1.py as follows:
It will listen on port 2222 of the worker1 server.
When the script runs, it will generate data on device /job:ps/task:0 and train on device /job:worker/task:1, as follows:
This post is only a first try at distributed TensorFlow; much work remains, such as how to combine it with GPUs, how to benchmark it, and so on.
https://github.com/tensorflow/tensorflow
http://googleresearch.blogspot.com/2016/04/announcing-tensorflow-08-now-with.html
1 server with Linux kernel (demo is Ubuntu 14.04 LTS)
Follow the steps on the VirtualBox downloads page:
According to your distribution, replace ‘vivid’ by ‘utopic’, ‘trusty’, ‘raring’, ‘quantal’, ‘precise’, ‘lucid’, ‘jessie’, ‘wheezy’, or ‘squeeze’.
Then you can see the VirtualBox graphical interface:
To play with Tricircle, we need to install 3 DevStack nodes: one for the top OpenStack, and two for the cross-pod bottom OpenStacks.
To give the VMs multiple VLAN networks, add 2 network devices for bridged use.
eth0 is the default network, using NAT.
eth1 is the VLAN external network, using the bridged method; in my environment I attached it to the host's eth1.
Otherwise, the ping test with a VLAN tag from Node1 to Node2 will be blocked.
With the same setting as above, Promiscuous Mode must be set to "Allow All", and it takes effect after a reboot.
There are many mirror sites in the world. I downloaded ubuntu-14.04-LTS from Ali-OSM (the Alibaba Open Source Mirror site), because it is very fast in China.
Follow the steps of the installer; since they are easy, they are omitted here.
After installation, you will see the console like this:
After installing the VMs, we can log in over SSH, so we can leave these VMs running in the background.
When we close the VMs, we need to choose the option:
The detailed methods can be found in OpenStack/Tricircle.
Install Open vSwitch for creating bridges:
apt-get install sudo -y
echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
sudo apt-get install git -y
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
./stack.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password #change password as you set in your own environment
export OS_AUTH_URL=http://127.0.0.1:5000
export OS_IDENTITY_API_VERSION=3
#It’s very important to set region name to the top openstack, because tricircle has different API urls.
export OS_REGION_NAME=RegionOne
cd tricircle/devstack
chmod +x verify_top_install.sh
./verify_top_install.sh 2>&1 | tee logs
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password #change password as you set in your own environment
export OS_AUTH_URL=http://127.0.0.1:5000
export OS_IDENTITY_API_VERSION=3
#It’s very important to set region name to the top openstack, because tricircle has different API urls.
export OS_REGION_NAME=RegionOne
cd tricircle/devstack
chmod +x verify_cross_pod_install.sh
./verify_cross_pod_install.sh 2>&1 | tee logs
ping -c 4 10.0.2.3
ping -c 4 10.0.1.3
So the cross pod networking has been verified.
The SODA (Shanghai Open Data Apps) contest has more than 400 participants in total. Processing the contest's big data requires strong technical support, so we provided two data platforms free of charge for the teams that entered the finals, each loaded with the contest data. They are:
This post describes how the GitLab environment was deployed, how to use it, and how the backend Spark cluster is invoked automatically for data computation.
For the SODA Spark environment, see this post.
GitLab is an open source application developed with Ruby on Rails that implements a self-hosted Git repository service, with public and private projects accessible through a web interface.
It has features similar to GitHub: browsing source code, managing issues and comments, and managing team access to repositories. It makes it easy to browse committed versions and provides a file history. Team members can communicate with the built-in simple chat (Wall). It also provides a code snippet collection feature for easy code reuse and later lookup.
Considering privacy and security, all contest code is hosted on a private GitLab repository built entirely by OMNILab. Participants upload their code to the contest GitLab, and can then invoke the dedicated backend Spark cluster through GitLab for data computation.
The repository setup process is as follows:
Install dependencies, including the SSH and email services:
Add the GitLab package repository and install GitLab:
First, extract the usernames from the email addresses provided by KESCI with a regular expression:
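A minimal Python sketch of this step (the sample email addresses are hypothetical):

```python
import re

# Hypothetical sample of the emails provided by KESCI.
emails = [
    "alice@example.com",
    "bob.lee@kesci.com",
    "team_42@soda.org",
]

# Capture everything before the '@' as the GitLab username.
pattern = re.compile(r"^([^@]+)@")
usernames = []
for e in emails:
    m = pattern.match(e)
    if m:
        usernames.append(m.group(1))

print(usernames)  # ['alice', 'bob.lee', 'team_42']
```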
Then generate a password for each user; here we used the makepasswd tool on Linux to generate passwords in batch:
Create the users in batch with GitLab's command-line tools.
GitLab supports operations such as creating users from the command line via a private token, which saves the manual work.
We create the users in batch with the following two scripts:
Apache Spark is an open source cluster computing framework originally developed at UC Berkeley's AMPLab. Whereas Hadoop's MapReduce writes intermediate data to disk after a job finishes, Spark uses in-memory computing and can analyze data in memory before it is written to disk. Spark can run programs up to 100 times faster than Hadoop MapReduce in memory, and up to 10 times faster even when running from disk.
For this contest, OMNILab provides participants with a powerful Spark cluster: