User:OrenBochman/Search/BrainStorm
Brainstorm Some Search Problems
Query Expansion
Problem: HTML pages also contain CSS, markup, scripts, and comments
- Solution: either index these too, or run a filter to remove them. Some strategies are:
  - Discard all markup (a minimal sketch follows this list).
    - A markup_filter/tokenizer could be used to discard markup.
    - The Lucene Tika project can do this.
    - Other ready-made solutions exist.
  - Keep all markup.
    - Write a markup analyzer that would be used to compress the page, reducing storage requirements (interesting if one also wants to compress output for integrating into a DB or cache).
  - Selective processing.
    - A table_template_map extension could be used in a strategy to identify structured information for deeper indexing.
    - This is the most promising: it can detect/filter out unapproved markup (JavaScript, CSS, broken XHTML).
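Below is a minimal sketch of the "discard all markup" strategy in plain Java; the class and method names are illustrative assumptions, and a production filter would use Apache Tika or a real HTML parser rather than regular expressions.

```java
import java.util.regex.Pattern;

/** Illustrative sketch only: strip scripts, styles, comments and tags before indexing. */
public class MarkupFilter {
    private static final Pattern SCRIPT_OR_STYLE =
            Pattern.compile("(?is)<(script|style)[^>]*>.*?</\\1>");
    private static final Pattern COMMENT = Pattern.compile("(?s)<!--.*?-->");
    private static final Pattern TAG = Pattern.compile("<[^>]+>");

    /** Returns only the visible text of an HTML fragment. */
    public static String stripMarkup(String html) {
        String text = SCRIPT_OR_STYLE.matcher(html).replaceAll(" ");
        text = COMMENT.matcher(text).replaceAll(" ");
        text = TAG.matcher(text).replaceAll(" ");
        return text.replaceAll("\\s+", " ").trim();
    }
}
```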
NG Search Features
Problem: Document Analysis is Language-Specific
Wikipedia documents come in over 200 languages. Language-specific analyzer implementations exist for only a limited number of languages. Most languages (synthetic and agglutinative) would be handled satisfactorily by N-gram-based analysis, which can be done in a language-independent way.
By developing a cross-language analyzer with language-detection capabilities it would be possible to have the best of all worlds. It would detect the document's language and each token's language, and would then defer treatment of the tokens to a language-specific implementation. It would also satisfy Lucene's requirement of one analyzer per document.
Some projects, for example the Wiktionaries, can have 10+ languages represented in a single document.
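As a rough illustration of language-independent N-gram analysis (Lucene also ships an NGramTokenizer that does this inside an analysis chain), a token can simply be expanded into overlapping character n-grams; the helper below is a hypothetical sketch, not a drop-in analyzer.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch: expand a token into overlapping character n-grams. */
public class CharNGrams {
    public static List<String> ngrams(String token, int n) {
        List<String> grams = new ArrayList<>();
        for (int i = 0; i + n <= token.length(); i++) {
            grams.add(token.substring(i, i + n));
        }
        return grams;
    }
    // ngrams("searching", 3) -> [sea, ear, arc, rch, chi, hin, ing]
}
```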
Multilingual Analyzer
Language can be:
- Single - one language with ~98% confidence
- Confidence map - several high-confidence candidates (e.g. a two-language Wiktionary document)
- Mixture model (bit vector) - for Wiktionary (e.g. a 20-language Wiktionary document, a multilingual message page)
- Undetected (binary, CDATA, database content, SVG, etc.)
Requirements:
- Asymmetrical API for index/search modes (takes a base-language field)
- Token and document variants
- Store language info (language mixture and confidence score) in the field payload or in the token type
- Extract features from the query and check them against a model prepared offline
- The model would contain lexical features such as:
  - alphabet
  - bi/trigram distribution (a detection sketch follows this list)
  - stop lists: a collection of common word/POS/language sets (or lemma/language pairs)
  - normalized frequency statistics based on sampling full text from different languages
- A lightweight model would be glyph-based.
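One possible shape for the offline model and the detection step, sketched in plain Java (all class and field names are assumptions): each language profile maps character trigrams to normalized frequencies, and the resulting confidence map can then be collapsed into the single / confidence-map / mixture / undetected cases listed above.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch of profile-based language scoring. */
public class LanguageGuesser {
    // language -> (trigram -> normalized frequency), prepared offline
    private final Map<String, Map<String, Double>> profiles;

    public LanguageGuesser(Map<String, Map<String, Double>> profiles) {
        this.profiles = profiles;
    }

    /** Returns a confidence map <language, score>; the caller decides whether the top
     *  score means a single language, a mixture, or an undetected document. */
    public Map<String, Double> score(String text) {
        Map<String, Double> scores = new HashMap<>();
        for (Map.Entry<String, Map<String, Double>> profile : profiles.entrySet()) {
            double s = 0.0;
            for (int i = 0; i + 3 <= text.length(); i++) {
                s += profile.getValue().getOrDefault(text.substring(i, i + 3), 0.0);
            }
            scores.put(profile.getKey(), s / Math.max(1, text.length()));
        }
        return scores;
    }
}
```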
Problem: Morphological Variation in Language
Non-Technical Explanation
"To be": am; is; was; are; were; will be; would be; should be; used to be
are different forms of the same word. A search engine is expected to treat all word forms as equivalent under normal circumstances.
Languages with rich morphology (synthetic, agglutinative) have many lexical word forms, which reduces both precision and recall. By treating words as lemmas (equivalence classes of words) it is possible to overcome this problem. However, automatically mapping words to lemmas is a non-trivial task.[1]
Luckily the Wiktionary projects provide morphological details (inflection). It should be possible to create a template which could be used to annotate lemmas in Wiktionaries (a prototype is available on my Wiktionary page). These could be extracted by a lemma extractor and generalized with high confidence to undocumented parts of the language.
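A minimal sketch of how extracted inflection data could be used at analysis time, assuming the extractor emits (lemma, forms) pairs; the class below is illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch: map every surface form to its lemma (equivalence-class representative). */
public class LemmaTable {
    private final Map<String, String> formToLemma = new HashMap<>();

    public void add(String lemma, String... forms) {
        formToLemma.put(lemma, lemma);
        for (String form : forms) {
            formToLemma.put(form, lemma);
        }
    }

    /** Unknown forms fall back to themselves. */
    public String lemmaOf(String form) {
        return formToLemma.getOrDefault(form, form);
    }
}
// e.g. table.add("be", "am", "is", "was", "are", "were"); table.lemmaOf("was") -> "be"
```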
Lemma Extractor
- Mine lemmas based on analysis of Wiktionary (a partial sed script exists).
- Enumerate/annotate lemma members with morphological state for human debugging and for further NLP processing (optional).
- Bootstrap an automatic lemma analyzer using this method.
- Use cross-language tricks to check confidence (optional).
- Provide feedback for Wiktionary use.
- Post-algorithmic overrides of errors.
Simple deliverables (a data-structure sketch follows this list):
- Agglutinative morphology:
  - S: stem list
  - A: affix list
  - M: mappings <S,A>
- Synthetic morphology:
  - S: stem list
  - T: template list
  - M: mappings <S,T>
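One possible, illustrative shape for these deliverables as plain data structures (all names are assumptions):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Illustrative sketch of the deliverable tables. */
public class MorphologyTables {
    // Agglutinative morphology
    Set<String> stems;                          // S: stem list
    Set<String> affixes;                        // A: affix list
    Map<String, List<String>> stemToAffixes;    // M: mappings <S,A>

    // Synthetic morphology
    Set<String> templates;                      // T: template list
    Map<String, List<String>> stemToTemplates;  // M: mappings <S,T>
}
```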
Lemma Analyzer
Four main approaches to lemmas:
- Generative grammar / rule-based list
  - Pros
    - Good as a gold standard
  - Cons
    - Rules have exceptions
    - Requires new work per language
    - Requires dual expertise: language-specific linguistics and lexicography
  - Tool
    - Hspell
- Hand-crafted database - for example Wiktionary/DBpedia
- Machine induction of morphology - for example from Wiktionary/DBpedia
  - Pros
    - Once configured, can be fully automated.
    - Objective criterion 1: MDL (minimum description length).
    - Objective criterion 2: gold standard.
    - Objective criterion 3: other triage techniques.
    - Objective criterion 4: semantic triage.
    - Heuristics-based - needs parameter adjustment per language.
    - Induction of suffixes, prefixes, simple signatures and complex signatures (a toy sketch follows this list).
    - Induction of allophony via a proximity heuristic (needs added triage/inspection).
    - Induction of elision via a ^-1 operator (needs added triage/inspection).
    - Induction of reduplication via a ^2 operator (needs triage/inspection).
    - Can be used to make spelling checkers.
    - Can be used to make an FSM version - see below.
  - Cons
    - Heuristics-based - needs review (can be outsourced to non-experts or covered by more triage).
    - Requires adjusting parameters to make new languages work.
    - Requires integration in the form of a Lucene version, including support for different morphology versions based on growing corpus size.
    - Does not analyze/tag morphological states.
    - Should process named entities and personal names differently.
    - The morphology is compression-oriented rather than search-oriented and requires restructuring based on roots and POS or morphological states. (This looks like an oversight in the original design.)
  - Tools
    - Linguistica
  - Comments
    - Can benefit from better triage
      - semantic
    - Can benefit from more complex heuristics, e.g.
      - considering an n-gram/skip-gram eigenvalue matrix
      - phonological and phonotactic considerations (vowel harmony, consonant-vowel at word start)
    - Can benefit from named entity resolution
- FSM-based morphology
  - Pros
    - Great performance
    - Great for packaging
  - Cons
    - Very complex file formats
    - Requires lexicons
    - Requires new work per language
    - Requires triple expertise: language-specific linguistics, lexicography, and the specific FSM technology
  - Tool
    - Apertium and their FSM tools project
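To make the machine-induction option concrete, here is a toy sketch of the suffix-signature idea behind Linguistica and Goldsmith's algorithm.[1] The real algorithm uses MDL to choose among candidate cuts; this sketch simply keeps every split up to a maximum suffix length and groups stems by the set of suffixes they take.

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

/** Toy sketch of suffix-signature induction (not the real MDL-based algorithm). */
public class SignatureInduction {
    public static Map<Set<String>, Set<String>> signatures(Collection<String> words, int maxSuffix) {
        // Collect, for every candidate stem, the suffixes observed with it.
        Map<String, Set<String>> stemToSuffixes = new HashMap<>();
        for (String w : words) {
            for (int cut = Math.max(1, w.length() - maxSuffix); cut < w.length(); cut++) {
                stemToSuffixes.computeIfAbsent(w.substring(0, cut), k -> new TreeSet<>())
                              .add(w.substring(cut));
            }
        }
        // Group stems sharing the same suffix set: that shared set is a "signature".
        Map<Set<String>, Set<String>> signatureToStems = new HashMap<>();
        for (Map.Entry<String, Set<String>> e : stemToSuffixes.entrySet()) {
            if (e.getValue().size() < 2) continue; // a useful signature needs at least two suffixes
            signatureToStems.computeIfAbsent(e.getValue(), k -> new TreeSet<>()).add(e.getKey());
        }
        return signatureToStems;
    }
    // e.g. [jump, jumps, jumped, walk, walks, walked] yields the signature {ed, s}
    // shared by the stems {jump, walk} (plus shorter accidental splits).
}
```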
TODO - organize
- In languages with rich morphology this will reduce the effectiveness of search (e.g. Hebrew, Arabic, Hungarian, Swahili).
- Text-mine en.Wiktionary and xx.Wiktionary for the data of a "lemma analyzer" (store it in a table based on the Apertium morphological dictionary format).
- Index xx.Wikipedia for frequency data and use a row/column algorithm to fill in the gaps of the morphological dictionary table.
- Dumb lemma (a bag of forms with a representative)
- Smart lemma (a list ordered by frequency)
- Quantum lemma (organized by morphological state and frequency) - see the sketch after this list
- Lemma-based indexing.
- Run a semantic disambiguation (tagging) algorithm to disambiguate.
- Other benefits:
  - Lemma-based compression (arithmetic coding based on the smart lemma).
  - Indexing all lemmas.
  - Smart resolution of disambiguation pages.
  - An algorithm to translate English to Simple English.
  - Excellent language detection for search.
- Metrics:
  - Extract the amount of information contributed by a user
    - since inception.
    - in the final version.
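A rough sketch of the three lemma representations as data structures (illustrative names only):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Illustrative sketch of the three lemma representations. */
public class Lemmas {
    /** Dumb lemma: an unordered bag of forms plus one representative. */
    record DumbLemma(String representative, Set<String> forms) {}

    /** Smart lemma: forms ordered by corpus frequency, most frequent first. */
    record SmartLemma(String representative, List<String> formsByFrequency) {}

    /** Quantum lemma: per morphological state, the realized form and its frequency. */
    record QuantumLemma(String representative,
                        Map<String, String> formByState,
                        Map<String, Double> frequencyByState) {}
}
```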
How can search be made more interactive via Facets?
- Solr instead of raw Lucene could provide faceted search involving categories (see the query sketch after this list).
- The single most impressive change to search could be via facets.
- Facets can be generated via categories (though they work best in multiple shallow hierarchies).
- Facets can be generated via template analysis.
- Facets can be generated via semantic extensions (explore).
- A focus on culture (local, wiki), sentiment, importance, and popularity (edits, views, reverts) may be refreshing.
- Facets can also be generated using named entity and relational analysis.
- Facets may have a substantial processing cost if done wrong.
- A cluster-map interface might be popular.
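For illustration, a faceted request to Solr could look like the sketch below; facet=true, facet.field and facet.mincount are standard Solr request parameters, while the core URL and the category field name are assumptions about the schema.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

/** Illustrative sketch: build a faceted Solr query URL. */
public class FacetQueryExample {
    public static String facetUrl(String solrBase, String userQuery) {
        return solrBase + "/select"
                + "?q=" + URLEncoder.encode(userQuery, StandardCharsets.UTF_8)
                + "&facet=true"
                + "&facet.field=category"   // assumed field holding page categories
                + "&facet.mincount=1"
                + "&rows=10";
    }
    // e.g. facetUrl("http://localhost:8983/solr/wiki", "art of war")
}
```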
How Can Search Resolve Unexpected Title Ambiguity?
- The Art of War prescribes the following advice: "know the enemy and know yourself and you shall emerge victorious in 1000 searches" (italics are mine).
- Google called it "I'm Feeling Lucky".
Ambiguity can come from:
- The lexical form of the query (bank: river vs. money).
- The result domain - the top search result is an exact match for a disambiguation page.
In either case the search engine should be able to make a good (measured) guess as to what the user meant and give them the desired result.
The following data is available:
- Squid cache access logs, sampled at 1 in 1000.
- All edits are logged too.
Instrumenting Links
- If we wanted to collect intelligence we could instrument all links to jump to a redirect page which logs <source, target, user/IP-cookie, timestamp> and then fetches the required page (a sketch of the log record follows this list).
- It would be interesting to have these stats for all pages.
- It would be really interesting to have these stats for disambiguation/redirect pages.
- Some of this may be available from the site logs (are there any?).
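The logged tuple could be as simple as the record sketched below (names are illustrative; a real implementation would batch-write to a log file or a queue rather than stdout).

```java
import java.time.Instant;

/** Illustrative sketch of the click record the redirect page would log. */
public class ClickLog {
    /** One logged link traversal: <source, target, user/IP-cookie, timestamp>. */
    record Click(String sourcePage, String targetPage, String userOrCookie, Instant timestamp) {}

    public void log(Click click) {
        System.out.printf("%s\t%s\t%s\t%s%n",
                click.sourcePage(), click.targetPage(), click.userOrCookie(), click.timestamp());
    }
}
```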
Use case 1. General browsing history stats available for disambiguation pages
Here is a resolution heuristic:
- Use an intelligence vector of <target, frequency> to jump to the most popular target (an 80% solution) - call it the "I hate disambiguation" preference (a sketch follows this list).
- Use an intelligence vector <source, target, frequency> to produce document term-vector projections of source vs. target to match the most related source and target pages (the source should be indexed).
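A minimal sketch of the "I hate disambiguation" preference, assuming the <target, frequency> counts for a disambiguation page have already been aggregated:

```java
import java.util.Map;
import java.util.Optional;

/** Illustrative sketch: jump straight to the most popular target of a disambiguation page. */
public class DisambiguationResolver {
    public static Optional<String> mostPopularTarget(Map<String, Long> targetFrequencies) {
        return targetFrequencies.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey); // empty if there is no data: show the disambiguation page
    }
}
```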
Use case 2. Crowd-source local interest
Search patterns are often affected by television etc. This calls for analyzing search data and producing the following intelligence vector: <top memes, geo-location>. This would be produced every N <= 15 minutes.
- Use the intelligence vector <source, target, target freshness, frequency> together with <top memes, geo-location>, when significant for the search term, to steer towards the current interest.
Use case 3. User-specific browsing history also available
- Use <source, target, frequency> as above, but with a memory <my top memes + edit history> weighted by time, to fetch personalized search results.
How can search be made more relevant via Intelligence?
- Use the current page (AKA the referrer)
- Use browsing history
- Use search history
- Use Profile
- API for serving ads/fundraising
How Can Search Be Made More Relevant via Metadata Extraction?
While a semantic wiki is one approach to metadata collection, Apache UIMA offers the possibility of extracting metadata from free text (as well as from templates).
- Entity detection.
Bugs
References
- [1] John A. Goldsmith, An Algorithm for the Unsupervised Learning of Morphology, http://hum.uchicago.edu/~jagoldsm/Papers/algorithm.pdf (accessed January 20, 2010).