Relevance redefined


Description

Presentation at IGeLU 2014, Oxford: shifting the view on relevance in discovery tools from the system to the user context. The concept of relevance in retrieving information resources in discovery tools like Primo needs to be redefined. It should take into account the wider context of queries and retrieved items outside the central and local indexes. Content and relevance are inextricably interlinked. Relevance is only calculated for the isolated items in the indexed content; many indexed items may have relevant connections to each other in the real world, but these are not visible within the system in any way. The starting point should be the customer's full connected workflow instead of just the library's collections. Linked Open Data appears to be a relevant approach. This presentation gives some real-life use cases and suggests some tentative solutions.

Transcript of Relevance redefined

1. Relevance redefined
Lukas Koster
Library of the University of Amsterdam
@lukask
IGeLU 2014 - Oxford
[Slide art: the title surrounded by anagrams of "relevance redefined", such as "Deviance Nerd Feeler" and "Candid Reefer Eleven".]

2. Main discovery tool feedback issues
Content: not enough; too many; wrong types; no full text.
Relevance: not #1; too many; known item!?; WTF?
The main issues reported as feedback/survey results on discovery tools are about content and relevance. The funny thing is that the use of facets for refining is somehow not very popular.

3. Usual responses to feedback issues
Change the front end! Tabs - Facets - Filters - Font - Positions
More/less content! More of the same same same
Improve relevance ranking algorithms! Very shhhophisticated - Very shhhecret
The usual responses to feedback issues concern the front end, content, and relevance (ranking). Front-end UI changes are just about cosmetics and perception: more tabs with specific data sources, element positioning, etc. More or less content can have various effects, either more or less relevant results, but usually still the same traditional content types. The ranking algorithm is influenced by libraries and customers only to a small degree; most of it sits in the software, which is confidential and not transparent, for competitive reasons.

4. Before
Example: before. University of Amsterdam Primo originally offered the Google experience: one box (apart from advanced search), all sources, one blended results list.

5. After
Example: after. University of Amsterdam Primo now has three tabs: All, Local catalogue, Primo Central.

6. Same old
Same old UX tricks. Same old content types. Same old view on relevance.
Basically these changes are not actual changes at all: it's all cosmetic. UX/UI changes are about perception, not actual improvement of relevance. The content usually still consists of the same resource types and search indexes. Relevance is still seen from a system and result-set perspective.

7. iNTERLiNKED
[Slide art: the words SEARCH, RANK, CONTENT, CONTEXT and RELEVANCE arranged as an interlinked crossword.]
Every aspect is dependent on all the others: search, rank, content, context, relevance. There is no search without context. Search is executed in a specific, limited content index. Ranking is performed on the results within this limited index. Relevance is completely defined by a user's context.

8. Relevance = Context + Content
Objectively, we can say that relevance is determined by context and content.

9. Relevance = Relative : Subjective : Contextual
Person/Context: role, task, goal, need, workflow.
System: algorithm, content, index, query, collection, configuration.
There is a clash between the personal context and the system/collection. The personal context, defined by a person's specific needs in a specific role for a specific task or goal at a specific time, culminates in a specific query, which consists of a limited number of words in character-string format. The system doesn't know the personal context; it only has the indexed content, made up of specific collections, indexed in a certain way with specific system configurations, and the string-based query to run through that structure.

10. Relevance: Recall and Precision
Recall: the fraction of relevant instances that are retrieved = retrieved relevant instances / total relevant instances.
Precision: the fraction of retrieved instances that are relevant = retrieved relevant instances / total retrieved instances.
Recall and Precision are the basic concepts used for determining the relevance of result sets. They cannot be used to determine the actual relevance of specific results! That depends on context and can only be determined by the user.

11. Recall and Precision: example
Searched index: 1000 items, of which 300 are relevant for the query and 700 are irrelevant. Retrieved: 180 items, of which 120 are relevant and 60 irrelevant.
Recall = 120/300 = 0.4
Precision = 120/180 ≈ 0.67
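To make the recall and precision arithmetic of slides 10 and 11 concrete, here is a minimal Python sketch, not part of the original slides; the item IDs are invented and arranged to reproduce the numbers from slide 11:

```python
def recall_precision(relevant, retrieved):
    """Compute recall and precision for a query result set.

    relevant  -- set of item IDs that are actually relevant (ground truth)
    retrieved -- set of item IDs returned by the discovery tool
    """
    hits = relevant & retrieved              # retrieved relevant instances
    recall = len(hits) / len(relevant)       # fraction of relevant items found
    precision = len(hits) / len(retrieved)   # fraction of results that are relevant
    return recall, precision

# Numbers from slide 11: 300 relevant items, 180 retrieved, 120 in the overlap.
relevant = set(range(300))            # items 0..299 are relevant
retrieved = set(range(180, 360))      # 120 relevant (180..299) plus 60 irrelevant
r, p = recall_precision(relevant, retrieved)
print(f"recall={r:.2f} precision={p:.2f}")   # recall=0.40 precision=0.67
```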
12. Relevance ranking is NOT Relevance
Relevance = finding appropriate items: Recall, Precision.
Relevance ranking = determining the most relevant items within the retrieved set: Term Frequency, Inverse Document Frequency, Proximity, Value Score.
The retrieved set may not contain any relevant items at all, but it can still be ordered according to relevance.
Relevance is NOT relevance ranking! Relevance is finding/retrieving appropriate items, using the words in the query and, if available, context information. Recall and Precision are used to measure the degree of relevance of a result set. Relevance ranking is determining the most relevant items in a result set based on the query terms and the content of the retrieved items, using a number of standard measures: TF, IDF, proximity. The Value Score is a Primo-specific algorithm that looks at the number of words, the type of words, etc. Local boosting is also possible; this method does not take any content relevance into account, but simply uses brute force to promote items from specific (local) data sources. (A small ranking sketch follows after slide 16.)

13. Primo Central search and ranking enhancement - July 8, 2014
"As part of our continuing efforts to enhance search and ranking performance in Primo, we changed the way Primo finds matches for keyword searches within indexed full text. As part of this approach Primo lowers the ranking of, or excludes, items of low relevance from the result set that were previously included. You may find as part of this change that the number of results for some searches is reduced, although result lists have become more meaningful."
Official Ex Libris announcement of July 8, 2014, combined with improvements to known-item search and incomplete query terms in Primo 4.7. Something changed!? The announcement mixes up getting relevant results and relevance ranking: some results are actually excluded. This applies only to full-text resources, and only in Primo Central. It is not clear whether it is independent of software version/SP. It is unclear to libraries and customers what is modified in relevance, search and ranking, and how: an example of the non-transparent nature of discovery tools' relevance algorithms. There were a number of complaints about this on the Primo mailing list.

14. The System Perspective
Objectivizing a subjective experience.
Let's look at the traditional system perspective on relevance. It tries to turn a subjective process into an objective one.

15. Recall issues
The discovery tool index limits the recall scope in advance. Relevance is calculated on available, selected, indexed (scholarly) content, chosen by vendors and by libraries, not on everything.
First, some recall issues in discovery tools. Recall is limited in advance, because only a limited set of items of certain content types is available for searching; a lot of relevant content is not considered at all. What is available is decided by vendors, publishers and libraries. In Primo Central it is determined by Ex Libris' agreements with publishers and metadata vendors, and libraries decide what is enabled, what is subscribed, and what is free for search. In local Primo, libraries decide which collections, or parts of collections, are indexed.

16. Recall issues
NOT indexed: not accessible; not subscribed; not enabled; unusual resource types; connections; not digital.
Not indexed means not searched: content not accessible to index vendors or libraries; unusual resource types such as theatre performances, television interviews, research project information and historical events; content that is not digital, i.e. physical, tangible items; connections such as "influenced by", "collaborating with", temporal and genre relations. Such material may not fit in bibliographic or proprietary formats (MARC, DC, PNX).
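Slide 12 lists Term Frequency and Inverse Document Frequency among the standard ranking measures. As an illustration only (Primo's actual algorithm is confidential, as slide 3 notes), here is a minimal Python sketch of TF-IDF scoring over an invented toy corpus; proximity and Primo's Value Score are left out:

```python
import math

# Toy corpus; the documents and query are invented for illustration.
docs = [
    "relevance ranking in discovery tools",
    "library discovery tools and indexes",
    "relevance of search results for users",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)

def tf_idf(term, doc_tokens):
    """Score one term against one document with a basic TF-IDF weighting."""
    tf = doc_tokens.count(term) / len(doc_tokens)   # term frequency
    df = sum(1 for d in tokenized if term in d)     # document frequency
    idf = math.log((N + 1) / (df + 1)) + 1          # smoothed inverse document frequency
    return tf * idf

def score(query, doc_tokens):
    """Sum TF-IDF over all query terms: higher = ranked as more relevant."""
    return sum(tf_idf(t, doc_tokens) for t in query.split())

query = "relevance discovery"
ranked = sorted(range(N), key=lambda i: score(query, tokenized[i]), reverse=True)
for i in ranked:
    print(f"{score(query, tokenized[i]):.3f}  {docs[i]}")
```

Note that this only orders whatever the retrieved set happens to contain; as slide 12 stresses, a high-ranking result is not necessarily a relevant one.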
17. Recall issues
Indexed, but NOT found: by author name (string based); by subject (string based, languages); related but unlinked items (a chapter in a book).
Content that IS indexed but can't be found: author names are only strings, and textual variations of names, pseudonyms, etc. are indexed as such, so only items carrying the explicit author search term are found. Subjects are strings, individually indexed 'as is' from the data sources, in multiple languages, so only items carrying the explicit, specific subject search term are found. As for related items: a chapter may be indexed with a textual reference to the book it is part of, but the book (relevant for delivery) is not retrieved, nor is a link to it. (An identifier-based sketch addressing these string problems follows after slide 25.)

18. Author
Author name example. Charlotte Brontë used the (male) pseudonym/pen name Currer Bell for Jane Eyre (left screenshot: Wikipedia). In this case there are no links between the two names, so the very relevant Charlotte Brontë material is not retrieved (right screenshot: University of Amsterdam Primo).

19. Subject
Subject example. The topic/discipline "philosophy" (English) does not find material tagged with the Dutch "filosofie" (which also appears to be Czech).

20. Chapter
Connections example. A chapter written by a UvA researcher sits in the local institutional repository and is harvested into local Primo. The book it belongs to is in the Aleph catalogue, also harvested into local Primo. Yet the book is not retrieved as an item, so its delivery options cannot be presented directly.

21. Precision issues
The discovery tool limits precision by ambiguous indexing.
Next: some precision issues, caused by using strings instead of identifiers/concepts.

22. Precision issues
Indexed and/but erroneously found: by author name (string based); by subject (string based, languages); query too broad.
Indexed irrelevant items are retrieved erroneously: common author names return items by all authors with that name; similar subject terms with different or ambiguous meanings produce noise (VOC); a broad query with few terms produces too much noise.

23. Author
Example of author names. J. (Johanna, Jan, Joop, etc.) de Vries is a very common Dutch name, so the results consist of items by all the different authors with that name.

24. Subject
Example of subjects. The ambiguous, multilingual topic "VOC" spans physics (Volatile Organic Compounds), music (vocals) and history (Verenigde Oostindische Compagnie, the Dutch East India Company).

25. Too broad
Example of too-broad search terms: way too many results for a very common search term.
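Slides 17-24 trace both the recall misses and the precision noise back to matching on bare character strings. As a tentative illustration of the identifier-based, Linked Open Data style approach the description suggests, here is a minimal Python sketch; the authority table, identifiers and records are all invented for this example (a real system would draw on authority files such as VIAF):

```python
# Hypothetical authority file mapping name variants (including pseudonyms and
# spellings) to one canonical identifier, in the spirit of Linked Open Data
# authorities such as VIAF. All identifiers and records below are invented.
AUTHORITY = {
    "charlotte bronte": "person:bronte-c",
    "charlotte brontë": "person:bronte-c",
    "currer bell":      "person:bronte-c",   # male pen name used for Jane Eyre
}

# Toy index: each record carries the raw author string as found in the source.
RECORDS = [
    {"title": "Jane Eyre", "author": "Currer Bell"},
    {"title": "Shirley", "author": "Charlotte Brontë"},
    {"title": "The Professor", "author": "Charlotte Bronte"},
]

def author_id(name):
    """Resolve an author string to its canonical identifier, if known."""
    return AUTHORITY.get(name.lower())

def search_by_author(query):
    """Match on identifiers instead of strings: all name variants are found."""
    target = author_id(query)
    return [r["title"] for r in RECORDS if author_id(r["author"]) == target]

print(search_by_author("Charlotte Brontë"))
# ['Jane Eyre', 'Shirley', 'The Professor'] -- pseudonym and spelling variants included
```

The same move from strings to concept identifiers would address the subject cases: a single concept URI for "philosophy" would also match the Dutch "filosofie", and the three meanings of "VOC" would become three distinct, unambiguous concepts.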