喜糖 2011-11-01
It's been a while since my last update. Today I'll talk about a new feature I recently added to this project: full-text search with Lucene! As for what Lucene actually is, you can study that on your own; here I'll only cover how I wired it into my project. If you're not familiar with this project, have a look at my earlier posts first.
To be honest, this is my first time touching Lucene; I'd heard of it before but never used it. It took me two days: I started with videos, and even after watching them I still couldn't get it configured into my S2SH (Struts2 + Spring + Hibernate) stack. What can I say, I'm slow. In the end I found a sample project and, together with the videos, finally got it working. Lucene mainly operates on its own index files, so it really has little to do with the database. Still, to stay consistent with the rest of my project I kept the dao --> service --> action layering. That's not strictly necessary; personally I think a single service layer would be enough. Okay, enough talk, straight to the code:
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;

public class SearchDaoImpl implements SearchDao {

    // Add one document to the index.
    public void save(Document doc, IndexWriter indexWriter) {
        try {
            indexWriter.addDocument(doc);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                indexWriter.optimize();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    // Delete all documents matching the given term.
    public void delete(Term term, IndexWriter indexWriter) {
        try {
            indexWriter.deleteDocuments(term);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                indexWriter.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    // Replace the documents matching the term with the new document.
    public void update(Term term, Document doc, IndexWriter indexWriter) {
        try {
            indexWriter.updateDocument(term, doc);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                indexWriter.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    // Run the query and return up to 10000 hits.
    public TopDocs search(Query query, IndexSearcher indexSearcher) throws Exception {
        Filter filter = null;
        return indexSearcher.search(query, filter, 10000);
    }
}
As you can see, I didn't even write a filter; the main goal here is just to get familiar with integrating Lucene into an S2SH project.
public class ForumSearchServiceImpl implements ForumSearchService {

    private SearchDao searchDao;

    // Build the index from all forum posts.
    public void saveForumIndex(List<Forum> forumList) throws Exception {
        File indexFile = new File("D:\\index");
        Analyzer analyzer = new PaodingAnalyzer();
        IndexWriter indexWriter = new IndexWriter(indexFile, analyzer, true, MaxFieldLength.LIMITED);
        for (Forum forum : forumList) {
            Document doc = new Document();
            String id = forum.getId();
            Long bid = forum.getBoard().getId();
            String title = forum.getTitle();
            String detail = forum.getDetail();
            String postTime = BBSNSUtil.formatDateTime(new Date(forum.getPostTime()), Constant.formatDate);
            // The title is both searched and highlighted, so it is analyzed and
            // its term vector stores positions and offsets.
            doc.add(new Field("title", title, Field.Store.YES, Field.Index.ANALYZED,
                    Field.TermVector.WITH_POSITIONS_OFFSETS));
            // Use htmlparser to strip the HTML from the post body down to plain text.
            Parser parser = new Parser();
            parser.setInputHTML(detail);
            String strings = parser.parse(null).elementAt(0).toPlainTextString().trim();
            if (strings.length() != 0) {
                doc.add(new Field("detail", strings, Field.Store.YES, Field.Index.ANALYZED,
                        Field.TermVector.WITH_POSITIONS_OFFSETS));
            } else {
                // Placeholder shown when the body is video or images only, with no text.
                String str = "内容为视频或者图片,不包含文字";
                doc.add(new Field("detail", str, Field.Store.YES, Field.Index.ANALYZED,
                        Field.TermVector.WITH_POSITIONS_OFFSETS));
            }
            // postTime is used for descending sort, so it is analyzed;
            // it is never highlighted, so no term vector is needed.
            doc.add(new Field("postTime", postTime, Field.Store.YES, Field.Index.ANALYZED,
                    Field.TermVector.NO));
            // The primary keys are stored only: not analyzed, not highlighted.
            doc.add(new Field("id", id, Field.Store.YES, Field.Index.NO, Field.TermVector.NO));
            doc.add(new Field("bid", bid.toString(), Field.Store.YES, Field.Index.NO, Field.TermVector.NO));
            searchDao.save(doc, indexWriter);
        }
        indexWriter.close();
    }

    // Paged search, modeled on Hibernate-style pagination.
    public PageList searchFourm(String which, String keyWord, Pages pages) throws Exception {
        // First query: learn the total hit count so the page math is correct.
        QueryResult queryResult =
                this.searchFourm(which, keyWord, pages.getSpage(), pages.getPerPageNum());
        PageList pl = new PageList();
        if (pages.getTotalNum() == -1) {
            pages.setTotalNum(queryResult.getRecordCount());
        }
        pages.executeCount();
        // Second query: fetch the records for the (possibly adjusted) current page.
        queryResult = this.searchFourm(which, keyWord, pages.getSpage(), pages.getPerPageNum());
        pl.setObjectList(queryResult.getRecordList());
        pl.setPages(pages);
        return pl;
    }

    public QueryResult searchFourm(String which, String keyWord, int firstResult, int maxResult)
            throws Exception {
        File indexFile = new File("D:\\index");
        IndexReader reader = IndexReader.open(indexFile);
        // Paoding Chinese analyzer.
        Analyzer analyzer = new PaodingAnalyzer();
        // "which" selects the field to query: title or detail.
        QueryParser queryParser = new QueryParser(which, analyzer);
        IndexSearcher indexSearcher = new IndexSearcher(reader);
        // Parse the user's input into a query.
        Query query = queryParser.parse(keyWord);
        TopDocs topDocs = searchDao.search(query, indexSearcher);
        int recordCount = topDocs.totalHits;
        // Highlight formatter: wrap matched terms in a red <font> tag.
        SimpleHTMLFormatter sHtmlF = new SimpleHTMLFormatter("<font color='red'>", "</font>");
        Highlighter highlighter = new Highlighter(sHtmlF, new QueryScorer(query));
        // Show about 100 characters of context around each hit.
        highlighter.setTextFragmenter(new SimpleFragmenter(100));
        List<ForumSearch> recordList = new ArrayList<ForumSearch>();
        // Take only the documents belonging to the current page.
        int end = Math.min(firstResult + maxResult, topDocs.totalHits);
        for (int i = firstResult; i < end; i++) {
            ScoreDoc scoreDoc = topDocs.scoreDocs[i];
            int docSn = scoreDoc.doc;                // internal document number
            Document doc = indexSearcher.doc(docSn); // fetch the document by number
            ForumSearch fs = new ForumSearch();
            String title = doc.get("title");
            String detail = doc.get("detail");
            String id = doc.get("id");
            String bid = doc.get("bid");
            String postTime = doc.get("postTime");
            if (which.equals("title")) {
                // Highlight the title and set it.
                String bestFragment = highlighter.getBestFragment(analyzer, which, title);
                fs.setTitle(bestFragment);
                // If the body is under 100 characters, keep all of it; otherwise truncate.
                if (detail.length() < 100) {
                    fs.setDetail(detail);
                } else {
                    fs.setDetail(detail.substring(0, 100));
                }
            } else {
                // Searching the body: highlight the body, keep the title as-is.
                String bestFragment = highlighter.getBestFragment(analyzer, which, detail);
                fs.setDetail(bestFragment);
                fs.setTitle(title);
            }
            fs.setPostTime(postTime);
            fs.setId(id);
            fs.setBid(bid);
            recordList.add(fs);
        }
        return new QueryResult(recordCount, recordList);
    }

    public SearchDao getSearchDao() {
        return searchDao;
    }

    public void setSearchDao(SearchDao searchDao) {
        this.searchDao = searchDao;
    }
}
ForumSearchServiceImpl only does two things: one method builds the index, the others perform the search. To make returning data convenient I defined a small QueryResult class with just two fields; I won't paste its code here, since anyone following along can tell from the usage what those two fields are. Note that the search methods imitate Hibernate-style pagination. Next, here is my action-layer code:
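Since the post deliberately omits QueryResult, here is a minimal sketch of what it might look like, inferred purely from the getters called in the service code above (the field names are my reconstruction, not the author's original source):

```java
import java.util.List;

// Hypothetical reconstruction of the omitted QueryResult class, inferred
// from the queryResult.getRecordCount() / getRecordList() calls above.
class QueryResult {
    private final int recordCount;    // total number of hits (topDocs.totalHits)
    private final List<?> recordList; // the records for the current page

    public QueryResult(int recordCount, List<?> recordList) {
        this.recordCount = recordCount;
        this.recordList = recordList;
    }

    public int getRecordCount() {
        return recordCount;
    }

    public List<?> getRecordList() {
        return recordList;
    }
}
```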
// Rebuild the index from all forum posts.
public String createIndex() throws Exception {
    List<Forum> forumList = forumService.listForums();
    forumSearchService.saveForumIndex(forumList);
    return SUCCESS;
}

// Run a paged search and hand the result list to the view.
public String search() throws Exception {
    Pages pages = new Pages();
    pages.setPerPageNum(10);
    pages.setPage(this.getPage());
    pages.setFileName(basePath + "forumSearch.bbsns?action=search"
            + "&keyWord=" + keyWord + "&which=" + which);
    this.setPageList(forumSearchService.searchFourm(which, keyWord, pages));
    return "result";
}
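The current-page slice inside searchFourm (`int end = Math.min(firstResult + maxResult, topDocs.totalHits)`) is just a clamped window over the hit list. Pulled out on its own, with hypothetical names, the arithmetic looks like this:

```java
import java.util.ArrayList;
import java.util.List;

// Standalone sketch of the paging window used in searchFourm: take at most
// maxResult items starting at firstResult, clamped to the total number of
// hits so the last page never reads past the end of the result array.
class PageWindow {
    static <T> List<T> slice(List<T> hits, int firstResult, int maxResult) {
        int end = Math.min(firstResult + maxResult, hits.size());
        List<T> page = new ArrayList<T>();
        for (int i = firstResult; i < end; i++) {
            page.add(hits.get(i));
        }
        return page;
    }
}
```

With ten results per page, page two of seven hits yields only the last item rather than throwing an out-of-bounds error, which is exactly why the clamp is there.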
As for search and indexing, there is a lot more you could build on top of this, for example multi-keyword and multi-condition queries. As long as the search methods are implemented well and the index is built correctly, you can add things like an advanced-search page. I've been a bit lazy lately, so this is all I wrote for now, sigh...
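One easy route to multi-keyword search is to lean on Lucene's own query syntax: QueryParser understands AND/OR operators, so you can join the user's keywords into one query string before parsing it. A small sketch of such a helper (the class and method names are my own, not from the project):

```java
// Hypothetical helper: joins several keywords into a single Lucene query
// string so queryParser.parse(...) treats them as an AND-combined query.
class QueryStrings {
    static String allOf(String... keywords) {
        StringBuilder sb = new StringBuilder();
        for (String kw : keywords) {
            if (sb.length() > 0) {
                sb.append(" AND ");
            }
            // Quote each keyword so multi-word phrases survive parsing intact.
            sb.append('"').append(kw).append('"');
        }
        return sb.toString();
    }
}
```

For searching across both title and detail at once, Lucene also ships a MultiFieldQueryParser that parses one input against several fields, which would fit naturally into the searchFourm method above.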
Alright, that's about it; I'll stop here. If you have any questions, feel free to leave a comment and discuss.
Original post, first published here. Thanks for your support!