Pretrained Transformers for Text Ranking

Author: Jimmy Lin

Publisher: Springer Nature

Published: 2022-06-01

Total Pages: 307

ISBN-10: 3031021819

The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query. Although the most common formulation of text ranking is search, instances of the task can also be found in many natural language processing (NLP) applications. This book provides an overview of text ranking with neural network architectures known as transformers, of which BERT (Bidirectional Encoder Representations from Transformers) is the best-known example. The combination of transformers and self-supervised pretraining has been responsible for a paradigm shift in NLP, information retrieval (IR), and beyond. The book offers a synthesis of existing work as a single point of entry for practitioners who wish to gain a better understanding of how to apply transformers to text ranking problems, and for researchers who wish to pursue work in this area. It covers a wide range of modern techniques, grouped into two high-level categories: transformer models that perform reranking in multi-stage architectures, and dense retrieval techniques that perform ranking directly. Two themes pervade the book: techniques for handling long documents, beyond the typical sentence-by-sentence processing in NLP, and techniques for addressing the tradeoff between effectiveness (i.e., result quality) and efficiency (e.g., query latency, model and index size). Although transformer architectures and pretraining techniques are recent innovations, many aspects of how they are applied to text ranking are relatively well understood and represent mature techniques. However, many open research questions remain, and thus in addition to laying out the foundations of pretrained transformers for text ranking, this book also attempts to prognosticate where the field is heading.
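
To make the reranking stage described above concrete, here is a minimal sketch using the Hugging Face transformers library and a publicly available MS MARCO cross-encoder checkpoint; the model choice and the toy candidate list are illustrative assumptions, not prescriptions from the book:

```python
# Sketch of the reranking stage of a multi-stage ranking architecture:
# a first-stage retriever (e.g., BM25) supplies candidate passages, and
# a cross-encoder scores each (query, passage) pair for reordering.
# Assumes: pip install torch transformers; the checkpoint below is a
# public MS MARCO cross-encoder, not one prescribed by the book.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "cross-encoder/ms-marco-MiniLM-L-6-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

query = "what causes rainbows"
candidates = [  # in practice, the top-k passages from the first stage
    "Rainbows are caused by refraction and dispersion of sunlight in droplets.",
    "The stock market closed higher on Friday.",
]

inputs = tokenizer([query] * len(candidates), candidates,
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)  # one relevance score per pair

for score, passage in sorted(zip(scores.tolist(), candidates), reverse=True):
    print(f"{score:6.2f}  {passage}")
```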

Transformers for Natural Language Processing

Author: Denis Rothman

Publisher: Packt Publishing Ltd

Published: 2021-01-29

Total Pages: 385

ISBN-10: 1800568630

Publisher's Note: A new edition of this book is out now that includes working with GPT-3 and comparing the results with other models. It includes even more use cases, such as causal language analysis and computer vision tasks, as well as an introduction to OpenAI's Codex.

Key Features
- Build and implement state-of-the-art language models, such as the original Transformer, BERT, T5, and GPT-2, using concepts that outperform classical deep learning models
- Go through hands-on applications in Python using Google Colaboratory notebooks with nothing to install on a local machine
- Test transformer models on advanced use cases

Book Description
The transformer architecture has proved to be revolutionary, outperforming the classical RNN and CNN models in use today. With an apply-as-you-learn approach, Transformers for Natural Language Processing investigates in detail deep learning for machine translation, speech-to-text, text-to-speech, language modeling, question answering, and many more NLP domains with transformers. The book takes you through NLP with Python and examines prominent models and datasets built on the transformer architecture by pioneers such as Google, Facebook, Microsoft, OpenAI, and Hugging Face. The book trains you in three stages. The first stage introduces you to transformer architectures, starting with the original transformer before moving on to the RoBERTa, BERT, and DistilBERT models. You will discover training methods for smaller transformers that can outperform GPT-3 in some cases. In the second stage, you will apply transformers for Natural Language Understanding (NLU) and Natural Language Generation (NLG). Finally, the third stage will help you grasp advanced language understanding techniques such as optimizing social network datasets and fake news identification. By the end of this NLP book, you will understand transformers from a cognitive science perspective and be proficient in applying pretrained transformer models from tech giants to various datasets.

What you will learn
- Use the latest pretrained transformer models
- Grasp the workings of the original Transformer, GPT-2, BERT, T5, and other transformer models
- Create language understanding Python programs using concepts that outperform classical deep learning models
- Use a variety of NLP platforms, including Hugging Face, Trax, and AllenNLP
- Apply Python, TensorFlow, and Keras programs to sentiment analysis, text summarization, speech recognition, machine translation, and more
- Measure the productivity of key transformers to define their scope, potential, and limits in production

Who this book is for
Since the book does not teach basic programming, you must be familiar with neural networks, Python, PyTorch, and TensorFlow in order to learn their implementation with transformers. Readers who can benefit the most from this book include experienced deep learning and NLP practitioners, as well as data analysts and data scientists who want to process the increasing amounts of language-driven data.
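
To give a flavor of the apply-as-you-learn approach described above, here is a minimal sketch using the Hugging Face pipeline API for sentiment analysis; the library selects a default checkpoint, and none of this is code taken from the book itself:

```python
# Sketch of the Colab-style, hands-on workflow the blurb describes:
# load a pretrained transformer via the Hugging Face pipeline API and
# run sentiment analysis with no task-specific training.
# Assumes: pip install transformers; the default checkpoint is chosen
# by the library, not by the book.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
results = classifier([
    "The transformer architecture has proved to be revolutionary.",
    "Installing CUDA drivers at 2 a.m. is not my idea of fun.",
])
for r in results:
    print(r["label"], round(r["score"], 3))
```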

Natural Language Processing with Transformers, Revised Edition

Author: Lewis Tunstall

Publisher: "O'Reilly Media, Inc."

Published: 2022-05-26

Total Pages: 409

ISBN-10: 1098136764

Since their introduction in 2017, transformers have quickly become the dominant architecture for achieving state-of-the-art results on a variety of natural language processing tasks. If you're a data scientist or coder, this practical book (now revised in full color) shows you how to train and scale these large models using Hugging Face Transformers, a Python-based deep learning library. Transformers have been used to write realistic news stories, improve Google Search queries, and even create chatbots that tell corny jokes. In this guide, authors Lewis Tunstall, Leandro von Werra, and Thomas Wolf, among the creators of Hugging Face Transformers, use a hands-on approach to teach you how transformers work and how to integrate them in your applications. You'll quickly learn a variety of tasks they can help you solve:

- Build, debug, and optimize transformer models for core NLP tasks, such as text classification, named entity recognition, and question answering
- Learn how transformers can be used for cross-lingual transfer learning
- Apply transformers in real-world scenarios where labeled data is scarce
- Make transformer models efficient for deployment using techniques such as distillation, pruning, and quantization
- Train transformers from scratch and learn how to scale to multiple GPUs and distributed environments
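
As one concrete example of the deployment techniques listed above, here is a minimal sketch of post-training dynamic quantization in PyTorch; the checkpoint and the size comparison are illustrative assumptions, not the book's exact recipe:

```python
# Sketch of post-training dynamic quantization: convert the Linear
# layers of a (distilled) transformer to int8 for faster CPU inference
# and a smaller serialized model. Assumes: pip install torch transformers;
# the checkpoint is a public distilled model, chosen for illustration.
import os
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def size_mb(m):
    """Serialize the state dict to disk and report its size in MB."""
    torch.save(m.state_dict(), "tmp.pt")
    mb = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return mb

print(f"fp32: {size_mb(model):.1f} MB -> int8: {size_mb(quantized):.1f} MB")
```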

Advances in Information Retrieval

Author: Djoerd Hiemstra

Publisher: Springer Nature

Published: 2021-03-26

Total Pages: 808

ISBN-10: 3030721132

This two-volume set, LNCS 12656 and 12657, constitutes the refereed proceedings of the 43rd European Conference on IR Research, ECIR 2021, held virtually in March/April 2021 due to the COVID-19 pandemic. The 50 full papers presented together with 11 reproducibility papers, 39 short papers, 15 demonstration papers, 12 CLEF lab description papers, 5 doctoral consortium papers, 5 workshop abstracts, and 8 tutorial abstracts were carefully reviewed and selected from 436 submissions. The accepted contributions cover the state of the art in IR: deep learning-based information retrieval techniques, use of entities and knowledge graphs, recommender systems, retrieval methods, information extraction, question answering, topic and prediction models, multimedia retrieval, and much more.

The Probabilistic Relevance Framework

Author: Stephen Robertson

Publisher: Now Publishers Inc

Published: 2009

Total Pages: 69

ISBN-10: 1601983085

The Probabilistic Relevance Framework (PRF) is a formal framework for document retrieval, grounded in work done in the 1970s and 1980s, which led to the development of one of the most successful text-retrieval algorithms, BM25. In recent years, research in the PRF has yielded new retrieval models capable of taking into account structure and link-graph information. Again, this has led to one of the most successful web-search and corporate-search algorithms, BM25F. The Probabilistic Relevance Framework: BM25 and Beyond presents the PRF from a conceptual point of view, describing the probabilistic modelling assumptions behind the framework and the different ranking algorithms that result from its application: the binary independence model, relevance feedback models, BM25, and BM25F. Besides presenting a full derivation of the PRF ranking algorithms, it provides many insights about document retrieval in general and points to many open challenges in this area. It also discusses the relation between the PRF and other statistical models for IR, and covers some related topics, such as the use of non-textual features and parameter optimization for models with free parameters. The Probabilistic Relevance Framework: BM25 and Beyond is self-contained and accessible to anyone with a basic knowledge of probability and inference.
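
To make the framework's best-known result concrete, here is a minimal sketch of the standard BM25 scoring function with the usual k1 and b parameters; the implementation and variable names are illustrative, not Robertson's own notation or code:

```python
# Sketch of the BM25 ranking function that the PRF yields, in its
# common k1/b parameterization. IDF uses the standard +0.5 smoothing.
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Score one tokenized document against a query over a tokenized corpus."""
    N = len(corpus)                               # number of documents
    avgdl = sum(len(d) for d in corpus) / N       # average document length
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)  # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
        f = tf[term]                              # term frequency in this doc
        norm = f + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * f * (k1 + 1) / norm
    return score

corpus = [["rain", "falls"], ["bm25", "ranks", "documents"], ["sun", "shines"]]
print(bm25_score(["bm25", "documents"], corpus[1], corpus))
```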

Advances in Information Retrieval

Author: Jaap Kamps

Publisher: Springer Nature

Published: 2023-03-16

Total Pages: 781

ISBN-10: 3031282442

The three-volume set LNCS 13980, 13981, and 13982 constitutes the refereed proceedings of the 45th European Conference on IR Research, ECIR 2023, held in Dublin, Ireland, during April 2-6, 2023. The 65 full papers, 41 short papers, 19 demonstration papers, 12 reproducibility papers, and 10 doctoral consortium papers were carefully reviewed and selected from 489 submissions. The accepted papers cover the state of the art in information retrieval, focusing on user aspects, system and foundational aspects, machine learning, applications, evaluation, new social and technical challenges, and other topics of direct or indirect relevance to search.

Practical Weak Supervision

Author: Wee Hyong Tok

Publisher: "O'Reilly Media, Inc."

Published: 2021-09-30

Total Pages: 193

ISBN-10: 1492077038

Most data scientists and engineers today rely on quality labeled data to train machine learning models. But building a training set manually is time-consuming and expensive, leaving many companies with unfinished ML projects. There's a more practical approach. In this book, Wee Hyong Tok, Amit Bahree, and Senja Filipi show you how to create products using weakly supervised learning models. You'll learn how to build natural language processing and computer vision projects using weakly labeled datasets from Snorkel, a spin-off from the Stanford AI Lab. Because so many companies have pursued ML projects that never go beyond their labs, this book also provides a guide on how to ship the deep learning models you build. You will:

- Get up to speed on the field of weak supervision, including ways to use it as part of the data science process
- Use Snorkel AI for weak supervision and data programming
- Get code examples for using Snorkel to label text and image datasets
- Use a weakly labeled dataset for text and image classification
- Learn practical considerations for using Snorkel with large datasets and using Spark clusters to scale labeling
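
To show what the data programming workflow looks like in practice, here is a minimal sketch using Snorkel's labeling-function API; the keyword heuristics and the tiny spam/ham example are our own illustrations, not examples from the book:

```python
# Sketch of Snorkel-style data programming: write heuristic labeling
# functions, apply them to unlabeled text, and combine their noisy
# votes with a LabelModel. Assumes: pip install snorkel pandas.
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

SPAM, HAM, ABSTAIN = 1, 0, -1

@labeling_function()
def lf_contains_prize(x):
    # Toy keyword rule: "prize" suggests spam.
    return SPAM if "prize" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_longer_message(x):
    # Toy heuristic: longer messages tend to be legitimate here.
    return HAM if len(x.text.split()) > 5 else ABSTAIN

df = pd.DataFrame({"text": [
    "Claim your free prize now!!!",
    "Meeting moved to 3pm, see the updated agenda attached.",
]})

applier = PandasLFApplier(lfs=[lf_contains_prize, lf_longer_message])
L_train = applier.apply(df)            # label matrix: one column per LF

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=100)
print(label_model.predict(L_train))    # combined (probabilistic) labels
```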

Foundation Models for Natural Language Processing

Author: Gerhard Paaß

Publisher: Springer Nature

Published: 2023-05-23

Total Pages: 448

ISBN-10: 3031231902

This open access book provides a comprehensive overview of the state of the art in research and applications of Foundation Models and is intended for readers familiar with basic Natural Language Processing (NLP) concepts. In recent years, a revolutionary new paradigm has been developed for training models for NLP. These models are first pre-trained on large collections of text documents to acquire general syntactic knowledge and semantic information. Then, they are fine-tuned for specific tasks, which they can often solve with superhuman accuracy. When the models are large enough, they can be instructed by prompts to solve new tasks without any fine-tuning. Moreover, they can be applied to a wide range of media and problem domains, ranging from image and video processing to robot control learning. Because they provide a blueprint for solving many tasks in artificial intelligence, they have been called Foundation Models. After a brief introduction to basic NLP models, the main pre-trained language models (BERT, GPT, and the sequence-to-sequence transformer) are described, as well as the concepts of self-attention and context-sensitive embedding. Then, different approaches to improving these models are discussed, such as expanding the pre-training criteria, increasing the length of input texts, or including extra knowledge. An overview of the best-performing models for about twenty application areas is then presented, e.g., question answering, translation, story generation, dialog systems, and generating images from text. For each application area, the strengths and weaknesses of current models are discussed, and an outlook on further developments is given. In addition, links are provided to freely available program code. A concluding chapter summarizes the economic opportunities, mitigation of risks, and potential developments of AI.
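
As a concrete illustration of the prompting mode mentioned above (solving a new task without any fine-tuning), here is a minimal sketch using the Hugging Face zero-shot classification pipeline; the checkpoint is a common public choice, not one mandated by the book:

```python
# Sketch of prompting without fine-tuning: a pretrained NLI model is
# instructed, via candidate labels, to solve a classification task it
# was never fine-tuned for. Assumes: pip install transformers; the
# checkpoint is a widely used public model, chosen for illustration.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")
result = classifier(
    "The team released a model that generates images from text prompts.",
    candidate_labels=["computer vision", "economics", "sports"],
)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```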

Experimental IR Meets Multilinguality, Multimodality, and Interaction

Author: Alberto Barrón-Cedeño

Publisher: Springer Nature

Published: 2022-08-24

Total Pages: 582

ISBN-10: 3031136438

This book constitutes the refereed proceedings of the 13th International Conference of the CLEF Association, CLEF 2022, held in Bologna, Italy, in September 2022. The conference has a clear focus on experimental information retrieval, with special attention to the challenges of multimodality, multilinguality, and interactive search, ranging from unstructured to semi-structured and structured data. The 7 full papers presented together with 3 short papers in this volume were carefully reviewed and selected from 14 submissions. This year, the contributions addressed the following challenges: authorship attribution, fake news detection and news tracking, noise detection in automatically transferred relevance judgments, the impact of online education on children's conversational search behavior, analysis of multi-modal social media content, knowledge graphs for sensitivity identification, a fusion of deep learning and logic rules for sentiment analysis, medical concept normalization, and domain-specific information extraction. In addition, the volume presents 7 "best of the labs" papers, which were reviewed as full paper submissions against the same review criteria. 14 lab overview papers were accepted and represent scientific challenges based on new datasets and real-world problems in multimodal and multilingual information access.

Advances in Information Retrieval

Author: Djoerd Hiemstra

Publisher: Springer Nature

Published: 2021-03-29

Total Pages: 760

ISBN-10: 3030722406

This entry is the companion volume of the same two-volume set, LNCS 12656 and 12657, containing the refereed proceedings of ECIR 2021; see the Advances in Information Retrieval entry above for the full description.