How to dump the stacks of all threads to a file in gdb

1. First set the location of the log file in gdb and turn logging on

2. Use thread apply all bt to print the backtraces of all threads

Example:

Below are the stacks after attaching gdb to httpd:

(gdb) set logging file /tmp/test.txt
(gdb) set logging on
Copying output to /tmp/test.txt.
(gdb) thread apply all bt

Thread 2 (Thread 0x41ec5940 (LWP 7312)):
#0  0x00002b6370d241c0 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00002aaab3bbf229 in ?? () from /usr/lib64/libnspr4.so
#2  0x00002aaab3bbfe69 in PR_WaitCondVar () from /usr/lib64/libnspr4.so
#3  0x00002aaab3bc51bc in PR_Sleep () from /usr/lib64/libnspr4.so
#4  0x00002aaab302750e in ?? () from /usr/lib64/libssl3.so
#5  0x00002aaab3bc55cd in ?? () from /usr/lib64/libnspr4.so
#6  0x00002b6370d1f77d in start_thread (arg=<value optimized out>) at pthread_create.c:301
#7  0x00002b637120d9ad in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x2b63731b4360 (LWP 4718)):
#0  0x00002b6371206b42 in select () from /lib64/libc.so.6
#1  0x00002b6370b10d25 in apr_sleep () from /usr/lib64/libapr-1.so.0
#2  0x00002b636f242315 in ap_wait_or_timeout ()
#3  0x00002b636f24b79e in ap_mpm_run ()
#4  0x00002b636f225fd8 in main ()

You can then search /tmp/test.txt at your leisure.
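If all you need is the dump, the same thing also works non-interactively; a sketch, assuming a single httpd process (otherwise pass a concrete PID to -p):

gdb -p $(pidof httpd) -batch -ex "thread apply all bt" > /tmp/test.txt

-batch makes gdb execute the -ex command, detach, and exit, so the file is ready as soon as the command returns.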

refs:

http://stackoverflow.com/questions/18391808/getting-the-backtrace-for-all-the-threads-in-gdb

http://www.delorie.com/gnu/docs/gdb/gdb_25.html

http://stackoverflow.com/questions/1707167/how-extract-text-from-gdb

The following link has some other debugging tips, such as how to debug an X Error:

https://wiki.debian.org/HowToGetABacktrace

The difference between super and extends in Java generics

In one sentence:

super works best for output parameters (collections your code writes into), while extends works best for input parameters (collections your code only reads from).

Consider the following example:

import java.util.ArrayList;
import java.util.List;

static class A { }

static class B extends A { }

static class C extends B { }

public void test() {
    List<? super B> test = new ArrayList<B>();
    test = new ArrayList<A>();
    test = new ArrayList<B>();
    // test = new ArrayList<C>();   // compile error: will not accept anything below B

    // A a = test.get(0);           // compile error: get() only yields Object
    // B b = test.get(0);           // compile error
    // C c = test.get(0);           // compile error
    Object o = test.get(0);         // reading only gives you Object

    // test.add(new A());           // compile error: will not add anything above B
    test.add(new B());
    test.add(new C());

    List<? extends B> test1 = new ArrayList<B>();
    // test1 = new ArrayList<A>();  // compile error: will not accept anything above B
    test1 = new ArrayList<B>();
    test1 = new ArrayList<C>();

    A a1 = test1.get(0);
    B b1 = test1.get(0);
    // C c1 = test1.get(0);         // compile error: get() cannot yield anything below B

    // test1.add(new A());          // compile error: cannot add at all
    // test1.add(new B());          // compile error
    // test1.add(new C());          // compile error
}

(Screenshot: the compile errors as Eclipse shows them; they correspond to the commented-out lines above.)
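The JDK's own API follows this rule; for example, java.util.Collections.copy writes into its first parameter and only reads from its second:

public static <T> void copy(List<? super T> dest, List<? extends T> src)

dest is an output parameter, hence super; src is an input parameter, hence extends.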

For the detailed reasoning, see:

http://stackoverflow.com/questions/4343202/difference-between-super-t-and-extends-t-in-java

http://docs.oracle.com/javase/tutorial/java/generics/capture.html

What are containment references and non-containment references in EMF

containment reference: simply put, an EReference whose containment attribute is true.

Going a step further: if an EObject has a containment reference, then calling eContainer() on the EObject that the reference points to returns that container, and when the container is persisted, the EObject targeted by the containment reference is saved along with it.

non-containment reference: the two EObjects are merely related; eContainer() on the referenced EObject returns null, and the target is not saved as part of the referrer.
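To make this concrete, here is a minimal dynamic-EMF sketch; the Library/Book classes and the feature name are made up for illustration:

import java.util.List;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EReference;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.util.EcoreUtil;

public class ContainmentDemo {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        EcoreFactory f = EcoreFactory.eINSTANCE;

        // Two dynamic EClasses: a Library that owns Books.
        EClass library = f.createEClass();
        library.setName("Library");
        EClass book = f.createEClass();
        book.setName("Book");

        // A containment reference is simply an EReference with containment == true.
        EReference books = f.createEReference();
        books.setName("books");
        books.setEType(book);
        books.setUpperBound(-1);      // -1 means unbounded ("many")
        books.setContainment(true);
        library.getEStructuralFeatures().add(books);

        EObject lib = EcoreUtil.create(library);
        EObject b = EcoreUtil.create(book);
        ((List<EObject>) lib.eGet(books)).add(b);

        // The contained Book knows its container; with setContainment(false)
        // (the default) this would print null instead.
        System.out.println(b.eContainer()); // the Library instance
    }
}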

ref: EMF: Eclipse Modeling Framework, Second Edition

What is Xtext

The Xtext website puts it this way:

“Building your own domain-specific languages has never been so easy. Just put
your grammar in place and you not only get the working parser and linker but
also first class Eclipse support.”

In other words, building your own DSL has never been this easy: you just write your grammar and leave the rest to Xtext, including the parser, the linker, and seamless Eclipse support.

To learn Xtext, you should at least be familiar with the following:

  1. Principles of grammars (parsers: left associativity, precedence, etc.; and lexers), EBNF –> writing the Xtext grammar

  2. ANTLR –> generates the parser

  3. MWE2 –> drives the code-generation workflow

  4. EMF and Ecore –> the in-memory representation of the model

  5. Dependency injection and Google Guice –> wire everything together

  6. Developing with Eclipse –> IDE support

  7. The Java language, its libraries, and the JVM

  8. Xtend –> a DSL that makes Java developers' lives easier

  9. Xbase –> makes writing Xtext grammars easier

Before going further, a word on what a DSL is: a domain-specific language. A DSL developer can give the experts of a particular domain or industry a dialect that all of those experts understand; they stay unaware of the underlying implementation and can focus on their own work, which improves efficiency and separates responsibilities.

Xtext is a free, open-source project for implementing such DSLs.

With Xtext you only need to write an Xtext file describing your DSL's grammar, plus extend a few of the hooks Xtext provides, to get an Eclipse-backed development environment for your DSL. Like JDT or PDT, this environment offers syntax highlighting, auto-completion, code assist, syntax error checking, quick-fix suggestions, and more.

Xtext's workflow:

  • Write the Xtext grammar file (see the sketch after this list)
  • Run MWE2 to generate the skeleton of all the code you need; the skeleton already runs, and it includes the Ecore model and generated classes that EMF needs, the corresponding editor, unit tests, and more
  • At runtime, Guice wires the pieces together
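As a taste of the first step, here is a minimal grammar in the style of the canonical Xtext greetings example (the package name and URI are illustrative):

grammar org.example.hello.Hello with org.eclipse.xtext.common.Terminals

generate hello "http://www.example.org/hello/Hello"

Model:
    greetings+=Greeting*;

Greeting:
    'Hello' name=ID '!';

From this one file, the MWE2 workflow generates the ANTLR parser, the Ecore model, and a working Eclipse editor that already understands input like Hello World!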

When I find the time, I will write about how the individual parts connect.

http://www.euclideanspace.com/software/development/eclipse/xtext/index.htm

What is the Eclipse Compiler for Java (ECJ)

An incremental Java compiler. Implemented as an Eclipse builder, it is based on technology evolved from the VisualAge for Java compiler. In particular, it allows you to run and debug code which still contains unresolved errors.

Since 3.2, it is also available as a separate download. The name of the file is ecj.jar, and its corresponding source is also available. To get them, go to the download page and search for the section JDT Core Batch Compiler. This jar contains the batch compiler and the javac ant adapter.
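Besides the command line (java -jar ecj.jar -source 1.6 Hello.java), the jar exposes the batch compiler as a Java API. A sketch, assuming ecj.jar is on the classpath and a Hello.java exists in the working directory:

import java.io.PrintWriter;
import org.eclipse.jdt.core.compiler.batch.BatchCompiler;

public class EcjDemo {
    public static void main(String[] args) {
        // Takes the same arguments as the command line.
        boolean ok = BatchCompiler.compile(
                "-source 1.6 -target 1.6 Hello.java",
                new PrintWriter(System.out), // normal compiler output
                new PrintWriter(System.err), // compiler errors
                null);                       // no progress callback
        System.out.println(ok ? "compiled cleanly" : "had errors");
    }
}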

The technical challenge for incremental building is to determine exactly what needs to be rebuilt. For example, the internal state maintained by the Java builder includes things like a dependency graph and a list of compilation problems reported. This information is used during an incremental build to identify which classes need to be recompiled in response to a change in a Java resource.


http://help.eclipse.org/indigo/index.jsp?topic=%2Forg.eclipse.jdt.doc.user%2Ftasks%2Ftask-using_batch_compiler.htm


http://download.eclipse.org/eclipse/downloads/drops4/R-4.4-201406061215/#JDTCORE


http://help.eclipse.org/luna/index.jsp?topic=%2Forg.eclipse.jdt.doc.isv%2Fguide%2Fjdt_api_compile.htm

http://stackoverflow.com/questions/3061654/what-is-the-difference-between-javac-and-the-eclipse-compiler

How to run the wordcount example that ships with Hadoop

1. Create a file test.txt

2. Copy it to HDFS: bin/hadoop dfs -put test.txt test.txt

3. Run the job:

bin/hadoop jar hadoop-0.18.0-examples.jar wordcount -m 1 -r 2 test.txt out

4. Check the results

4.1 Print a file with bin/hadoop dfs -cat yourfilename (see the concrete example below)

4.2 Use a browser (you can get the NameNode's HTTP port from hadoop-default.xml, and its IP address with ifconfig)

(Screenshot: the NameNode front page)

(Screenshot: the results in the out directory)
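Since -r 2 above asks for two reducers, and assuming the default part-file naming of this Hadoop version, the concrete commands for step 4.1 would be:

bin/hadoop dfs -cat out/part-00000
bin/hadoop dfs -cat out/part-00001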

refs:

great hadoop tutorial:

https://developer.yahoo.com/hadoop/tutorial/index.html

A brief introduction to HDFS


This material provides an overview of the HDFS (Hadoop Distributed File
System) architecture and is intended for contributors. The goal of this
document is to provide a guide to the overall structure of the HDFS code so
that contributors can more effectively understand how changes that they are
considering can be made, and the consequences of those changes. The assumption
is that the reader has a basic understanding of HDFS, its purpose, and how it
fits into the Hadoop project suite.

The Hadoop project URL is http://hadoop.apache.org/hdfs/. An overview of HDFS can be found at http://hadoop.apache.org/docs/stable/hdfs_user_guide.html. Other useful background references are "The Google File System" (http://research.google.com/archive/gfs.html), "The Hadoop Distributed File System" (http://storageconference.org/2010/Papers/MSST/Shvachko.pdf), and "The Hadoop Distributed File System: Architecture and Design" (http://hadoop.apache.org/docs/r0.18.0/hdfs_design.pdf).

http://itm-vm.shidler.hawaii.edu/HDFS/ArchDoc.html