
Expertise finding is the use of tools for finding and assessing individual expertise. In the recruitment industry, expertise finding is the problem of searching for employable candidates with a particular required skill set. In other words, it is the challenge of linking humans to areas of expertise; as such, it is one of the two sub-problems of expertise retrieval, the other being expertise profiling.[1]

Importance of expertise

It can be argued that human expertise[2] is more valuable than capital, means of production or intellectual property.[citation needed] Unlike expertise, these other aspects of capitalism are now relatively generic: access to capital is global, as is access to means of production in many areas of manufacturing, and intellectual property can likewise be licensed. Expertise finding is also a key aspect of institutional memory, as without its experts an institution is effectively decapitated. However, finding and "licensing" expertise, the key to the effective use of these resources, remains much harder, starting with the very first step: finding expertise that one can trust.

Until very recently, finding expertise required a mix of individual, social and collaborative practices, a haphazard process at best. Mostly, it involved contacting individuals one trusts and asking them for referrals, while hoping that one's judgment about those individuals is justified and that their answers are thoughtful.

In the last fifteen years, a class of knowledge management software, termed "expertise locating systems", has emerged to facilitate and improve the quality of expertise finding. These systems range from social networking platforms to knowledge bases. Some, such as those in the social networking realm, rely on users to connect with one another, using social filtering to act as "recommender systems".

At the other end of the spectrum are specialized knowledge bases that rely on experts to populate a specialized type of database with their self-determined areas of expertise and contributions, and do not rely on user recommendations. Hybrids that combine expert-populated content with user recommendations also exist, and are arguably more valuable for doing so.

Still other expertise knowledge bases rely strictly on external manifestations of expertise, herein termed "gated objects", e.g., citation impacts for scientific papers, or data mining approaches in which many of the work products of an expert are collated. Such systems are more likely to be free of user-introduced biases (e.g., ResearchScorecard), though the use of computational methods can introduce other biases.
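One concrete example of such a gated signal is citation impact. The h-index is a widely used citation-impact metric, and a minimal implementation illustrates how mechanically such a measure can be derived from publication data (the citation counts below are invented for illustration):

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers.
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
```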

There are also hybrid approaches which use user-generated data (e.g., member profiles), community-based signals (e.g., recommendations and skill endorsements), and personalized signals (e.g., social connection between searcher and results).

Examples of the systems outlined above are listed in Table 1.

Table 1: A classification of expertise location systems

Type | Application domain | Data source | Examples
Social networking | Professional networking | User- and community-generated | —
Scientific literature | Identifying publications with the strongest research impact | Third party-generated | —
Scientific literature | Expertise search | Software | —
Knowledge base | Private expertise database | User-generated | MITRE Expert Finder (MITRE Corporation); MIT ExpertFinder (ref. 3); Decisiv Search Matters & Expertise (Recommind, Inc.); ProFinda (ProFinda Ltd); Skillhive (Intunex); Tacit Software (Oracle Corporation); GuruScan (GuruScan Social Expert Guide)
Knowledge base | Publicly accessible expertise database | User-generated | —
Knowledge base | Private expertise database | Third party-generated | MITRE Expert Finder (MITRE Corporation); MIT ExpertFinder (ref. 3); MindServer Expertise (Recommind, Inc.); Tacit Software
Knowledge base | Publicly accessible expertise database | Third party-generated | ResearchScorecard (ResearchScorecard Inc.); authoratory.com; BiomedExperts (Collexis Holdings Inc.); KnowledgeMesh (Hershey Center for Applied Research); Community Academic Profiles (Stanford School of Medicine); ResearchCrossroads.org (Innolyst, Inc.)
Blog search engines | — | Third party-generated | —

Technical problems

A number of interesting problems follow from the use of expertise finding systems:

  • Matching questions from non-experts to the database of existing expertise is inherently difficult, especially when the database does not contain the requisite expertise. The problem becomes more acute the less the non-expert knows about the domain, owing to the usual difficulties of keyword search over unstructured data that is not semantically normalized, as well as to variability in how well experts have set up their descriptive content pages. Improved question matching is one reason why semantically normalized third-party systems such as ResearchScorecard and BiomedExperts should be able to provide better answers to queries from non-expert users.
  • Avoiding expert fatigue caused by too many questions or requests from users of the system (ref. 1).
  • Finding ways to prevent "gaming" of the system to reap unearned expertise credibility.
  • Inferring implicit skills. Since users typically do not declare all of the skills they have, it is important to infer implicit skills that are closely related to their explicit ones; this inference step can significantly improve recall in expertise finding (see the sketch after this list).
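A minimal sketch of the last problem, inferring implicit skills from co-occurrence statistics over user profiles. The profile data and the scoring rule here are illustrative assumptions, not a description of any particular system:

```python
from collections import Counter
from itertools import combinations

# Hypothetical profile data: each user lists an incomplete set of skills.
profiles = [
    {"python", "machine learning", "numpy"},
    {"python", "numpy", "pandas"},
    {"machine learning", "deep learning", "numpy"},
]

# Count how often each skill, and each pair of skills, appears in profiles.
pair_counts = Counter()
skill_counts = Counter()
for skills in profiles:
    skill_counts.update(skills)
    pair_counts.update(frozenset(p) for p in combinations(sorted(skills), 2))

def infer_implicit_skills(explicit, min_score=0.5):
    """Suggest skills that frequently co-occur with the user's explicit ones."""
    scores = Counter()
    for skill in explicit:
        for other in skill_counts:
            if other in explicit:
                continue
            co = pair_counts[frozenset((skill, other))]
            if skill_counts[skill]:
                scores[other] += co / skill_counts[skill]  # conditional co-occurrence
    return [s for s, score in scores.most_common() if score >= min_score]

print(infer_implicit_skills({"python"}))  # e.g. suggests "numpy" first
```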

Expertise ranking

Means of classifying and ranking expertise (and therefore experts) become essential if the number of experts returned by a query is greater than a handful. This raises the following social problems associated with such systems:

  • How can expertise be assessed objectively? Is that even possible?
  • What are the consequences of relying on unstructured social assessments of expertise, such as user recommendations?
  • How does one distinguish authoritativeness as a proxy metric of expertise from simple popularity, which is often a function of one's ability to express oneself coupled with a good social sense?
  • What are the potential consequences of the social or professional stigma associated with the use of an authority ranking, such as those used by Technorati and ResearchScorecard?
  • How can expertise ranking be personalized to each individual searcher? This is particularly important for recruiting purposes: given the same skills, recruiters from different companies, industries and locations may have different preferences among candidates and their varying areas of expertise (see the sketch after this list).[3]
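A minimal sketch of the personalization question, assuming each searcher's preferences can be expressed as weights over match signals. The candidates, signals and weights below are all hypothetical:

```python
# Hypothetical candidates retrieved for the same skill query.
candidates = [
    {"name": "A", "skill_match": 0.9, "location_match": 0.0, "industry_match": 1.0},
    {"name": "B", "skill_match": 0.8, "location_match": 1.0, "industry_match": 0.0},
]

def personalized_score(candidate, searcher_weights):
    """Weighted sum of match signals; the weights encode the searcher's preferences."""
    return sum(searcher_weights[k] * candidate[k] for k in searcher_weights)

# A local recruiter cares about location; an industry specialist does not.
local_recruiter = {"skill_match": 1.0, "location_match": 0.8, "industry_match": 0.1}
specialist = {"skill_match": 1.0, "location_match": 0.1, "industry_match": 0.8}

for weights in (local_recruiter, specialist):
    ranked = sorted(candidates, key=lambda c: personalized_score(c, weights), reverse=True)
    print([c["name"] for c in ranked])
# Same candidates, different order: B first for the local recruiter, A for the specialist.
```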

Sources of data for assessing expertise

Many types of data sources have been used to infer expertise. They can be broadly categorized based on whether they measure "raw" contributions provided by the expert, or whether some sort of filter is applied to these contributions.

Unfiltered data sources that have been used to assess expertise include, in no particular order:

  • self-reported expertise on networking platforms
  • expertise sharing through platforms
  • user recommendations
  • help desk tickets: what the problem was and who fixed it
  • e-mail traffic between users
  • documents, whether private or on the web, particularly publications
  • user-maintained web pages
  • reports (technical, marketing, etc.)

Filtered data sources, that is, contributions that require approval by third parties (grant committees, referees, patent offices, etc.), are particularly valuable for measuring expertise in a way that minimizes biases arising from popularity or other social factors (a scoring sketch follows this list):

  • patents, particularly if issued
  • scientific publications
  • issued grants (failed grant proposals are rarely known beyond the authors)
  • clinical trials
  • product launches
  • pharmaceutical drugs
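As an illustration of how such filtered signals might be combined into a single measure, the sketch below standardizes per-expert counts so that no single prolific output channel dominates. The experts, counts and scoring rule are invented for illustration:

```python
import statistics

# Hypothetical counts of "gated" outputs per expert.
experts = {
    "Alice": {"patents": 3, "publications": 25, "grants": 2},
    "Bob":   {"patents": 0, "publications": 40, "grants": 1},
    "Carol": {"patents": 1, "publications": 10, "grants": 4},
}

signals = ["patents", "publications", "grants"]

# Standardize each signal (z-score) so heterogeneous counts are comparable.
means = {s: statistics.mean(e[s] for e in experts.values()) for s in signals}
stdevs = {s: statistics.pstdev(e[s] for e in experts.values()) or 1.0 for s in signals}

def expertise_score(counts):
    return sum((counts[s] - means[s]) / stdevs[s] for s in signals)

for name, counts in sorted(experts.items(), key=lambda kv: -expertise_score(kv[1])):
    print(name, round(expertise_score(counts), 2))
```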

Approaches for creating expertise content

  • Manual, either by experts themselves (e.g., Skillhive) or by a curator (Expertise Finder)
  • Automated, e.g., using software agents (e.g., MIT's ExpertFinder) or a combination of agents and human curation (e.g., ResearchScorecard)
  • In industrial expertise search engines (e.g., LinkedIn), many signals feed into the ranking function: user-generated content (e.g., profiles), community-generated content (e.g., recommendations and skill endorsements) and personalized signals (e.g., social connections). Moreover, user queries may specify aspects beyond the required expertise, such as locations, industries or companies, so traditional information retrieval features like text matching are also important. Learning to rank is typically used to combine all of these signals into a single ranking function (a minimal sketch follows this list).[3]
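A minimal pointwise learning-to-rank sketch of the approach just described, using logistic regression as the scoring model. The features, labels and training data are invented for illustration; production systems such as the one described in [3] are far more elaborate:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per (query, candidate) pair.
# Columns: text match, profile completeness, endorsement count (scaled),
# searcher-candidate connection strength.
X = np.array([
    [0.9, 0.8, 0.7, 0.1],
    [0.4, 0.9, 0.2, 0.9],
    [0.8, 0.2, 0.1, 0.0],
    [0.2, 0.3, 0.0, 0.2],
])
y = np.array([1, 1, 0, 0])  # 1 = the searcher contacted/hired the candidate

# Pointwise learning to rank: fit a classifier, then rank by predicted relevance.
ranker = LogisticRegression().fit(X, y)
scores = ranker.predict_proba(X)[:, 1]
print(np.argsort(-scores))  # candidate indices, best match first
```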

Collaborator discovery

In academia, a related problem is collaborator discovery, where the goal is to suggest suitable collaborators to a researcher. While expertise finding is an asymmetric problem (an employer looking for an employee), collaborator discovery aims to establish more symmetric relationships (collaborations). Also, while in expertise finding the task can often be clearly characterized, this is not the case in academic research, where future goals are fuzzier.[4]

References

  1. ^ Balog, Krisztian (2012). "Expertise Retrieval". Foundations and Trends in Information Retrieval. 6 (2–3): 127–256. doi:10.1561/1500000024.
  2. ^ Njemanze, Ikenna (2016). "What Does Being a Strategic HR Business Partner Look Like in Practice?". Archived from the original on June 21, 2018. Retrieved August 21, 2022.
  3. ^ a b c Ha-Thuc, Viet; Venkataraman, Ganesh; Rodriguez, Mario; Sinha, Shakti; Sundaram, Senthil; Guo, Lin (2015). "Personalized expertise search at LinkedIn". 2015 IEEE International Conference on Big Data (Big Data). pp. 1238–1247. arXiv:1602.04572. doi:10.1109/BigData.2015.7363878. ISBN 978-1-4799-9926-2. S2CID 12751245.
  4. ^ Schleyer, Titus; Butler, Brian S.; Song, Mei; Spallek, Heiko (2012). "Conceptualizing and advancing research networking systems". ACM Transactions on Computer-Human Interaction. 19 (1): 1–26. doi:10.1145/2147783.2147785. PMC 3872832. PMID 24376309.

Further reading

  1. Ackerman, Mark and McDonald, David (1998) "Just Talk to Me: A Field Study of Expertise Location" Proceedings of the 1998 ACM Conference on Computer Supported Cooperative Work.
  2. Hughes, Gareth and Crowder, Richard (2003) "Experiences in designing highly adaptable expertise finder systems" Proceedings of the DETC Conference 2003.
  3. Maybury, M., D'Amore, R., House, D. (2002). "Awareness of organizational expertise." International Journal of Human-Computer Interaction 14(2): 199-217.
  4. Maybury, M., D'Amore, R., House, D. (2000). Automating Expert Finding. International Journal of Technology Research Management. 43(6): 12-15.
  5. Maybury, M., D'Amore, R. and House, D. (December 2001). Expert Finding for Collaborative Virtual Environments. Communications of the ACM 44(12): 55-56. In Ragusa, J. and Bochenek, G. (eds). Special Section on Collaboration Virtual Design Environments.
  6. Maybury, M., D'Amore, R. and House, D. (2002). Automated Discovery and Mapping of Expertise. In Ackerman, M., Cohen, A., Pipek, V. and Wulf, V. (eds.). Beyond Knowledge Management: Sharing Expertise. Cambridge: MIT Press.
  7. Mattox, D., M. Maybury, et al. (1999). "Enterprise expert and knowledge discovery". Proceedings of the 8th International Conference on Human-Computer Interactions (HCI International 99), Munich, Germany.
  8. Tang, J., Zhang, J., Yao, L., Li, J., Zhang, L. and Su, Z. (2008) "ArnetMiner: extraction and mining of academic social networks" Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
  9. Vivacqua, A. (1999). "Agents for expertise location". Proceedings of the 1999 AAAI Spring Symposium on Intelligent Agents in Cyberspace, Stanford, CA.