Chinese Word Segmentation in Java with IK Analyzer

ReganHoo 2012-11-30

http://blog.csdn.net/lijun7788/article/details/7719166#

IK Analyzer is an open-source Chinese word segmentation framework built on Lucene. Download: http://code.google.com/p/ik-analyzer/downloads/list
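As background, segmenters like IK are dictionary-driven. A toy forward-maximum-matching sketch can illustrate the basic idea (this is only an illustration with a hypothetical mini-dictionary, not IK's actual algorithm, which is considerably more sophisticated):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FmmDemo {
    // Hypothetical mini-dictionary; IK ships a large built-in one.
    static final Set<String> DICT = new HashSet<>(Arrays.asList(
            "基于", "java", "语言", "开发", "轻量级", "中文", "分词", "工具包"));
    static final int MAX_LEN = 4; // length of the longest dictionary entry

    // Forward maximum matching: at each position, take the longest dictionary
    // word starting there; fall back to a single character when nothing matches.
    static List<String> segment(String text) {
        List<String> out = new ArrayList<>();
        int i = 0;
        while (i < text.length()) {
            String token = text.substring(i, i + 1); // single-char fallback
            int end = Math.min(i + MAX_LEN, text.length());
            for (int j = end; j > i + 1; j--) {
                String cand = text.substring(i, j);
                if (DICT.contains(cand)) {
                    token = cand;
                    break;
                }
            }
            out.add(token);
            i += token.length();
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(String.join("|",
                segment("基于java语言开发的轻量级的中文分词工具包")));
        // prints: 基于|java|语言|开发|的|轻量级|的|中文|分词|工具包
    }
}
```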

The following files need to be added to the project:

IKAnalyzer.cfg.xml

IKAnalyzer2012.jar

lucene-core-3.6.0.jar

stopword.dic

No changes to these files are needed; the defaults work out of the box.
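For reference, IKAnalyzer.cfg.xml is a Java XML properties file. A typical layout looks like the sketch below; the `ext.dic` entry is a hypothetical user dictionary (both entries are optional, and the exact key names should be checked against the sample file in the download):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- ext.dic is a hypothetical user dictionary, one word per line -->
    <entry key="ext_dict">ext.dic;</entry>
    <!-- stopword.dic is the stopword list shipped with the download -->
    <entry key="ext_stopwords">stopword.dic;</entry>
</properties>
```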

Example code (using IK Analyzer through the Lucene Analyzer API):

package com.haha.test;

import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.wltea.analyzer.lucene.IKAnalyzer;

public class Test2 {
    public static void main(String[] args) throws IOException {
        String text = "基于java语言开发的轻量级的中文分词工具包";
        // Create the analyzer; true enables smart (coarse-grained) mode,
        // false enables fine-grained mode
        Analyzer anal = new IKAnalyzer(true);
        StringReader reader = new StringReader(text);
        // Tokenize
        TokenStream ts = anal.tokenStream("", reader);
        CharTermAttribute term = ts.getAttribute(CharTermAttribute.class);
        // Iterate over the tokens
        while (ts.incrementToken()) {
            System.out.print(term.toString() + "|");
        }
        reader.close();
        System.out.println();
    }
}
 

Output:

基于|java|语言|开发|的|轻量级|的|中文|分词|工具包|

Using the IKSegmenter API directly (no Lucene Analyzer involved):

package com.haha.test;

import java.io.IOException;
import java.io.StringReader;

import org.wltea.analyzer.core.IKSegmenter;
import org.wltea.analyzer.core.Lexeme;

public class Test3 {

    public static void main(String[] args) throws IOException {
        String text = "基于java语言开发的轻量级的中文分词工具包";
        StringReader sr = new StringReader(text);
        // true enables smart mode, as in the previous example
        IKSegmenter ik = new IKSegmenter(sr, true);
        Lexeme lex = null;
        // next() returns null once the input is exhausted
        while ((lex = ik.next()) != null) {
            System.out.print(lex.getLexemeText() + "|");
        }
        sr.close();
        System.out.println();
    }
}
 
