Q1. What is NLP?
NLP stands for Natural Language Processing. NLP can be defined as the automatic manipulation of natural language, such as speech and text, by software. Research in natural language processing has been going on for more than 50 years, and the field grew out of linguistics with the rise of computers.
Q2. What are the applications of NLP?
There are various applications of Natural Language Processing (NLP) in the computer world. NLP uses artificial intelligence, computer science, and computational linguistics to allow machines to read and interpret text. Common applications include machine translation, spam filtering, digital assistants, sentiment analysis, and search.
Q3. How does NLP work?
Q4. List some areas of NLP?
Q5. What's NLP's role in social media?
Q6. What is morphology in NLP?
Morphology in NLP is the study of the structure of words and how words are formed. It identifies the root of a word and the prefixes and suffixes attached to that root. For example, take the word "unhappiness": looking at its formation, the prefix is "un", the root is "happy", and the suffix is "ness". This study of word formation and word structure is known as morphology.
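The affix-splitting idea above can be sketched as a toy Python function. The prefix and suffix lists below are illustrative assumptions, not a real morphological lexicon; real analyzers use finite-state transducers backed by full lexicons.

```python
# Toy affix splitter; the affix lists are made-up illustrations.
PREFIXES = ["un", "re", "dis"]
SUFFIXES = ["ness", "ment", "ing"]

def split_affixes(word):
    prefix, suffix, root = "", "", word
    for p in PREFIXES:
        if root.startswith(p):
            prefix, root = p, root[len(p):]
            break
    for s in SUFFIXES:
        if root.endswith(s):
            suffix, root = s, root[:-len(s)]
            break
    # Note: the surface root of "unhappiness" comes out as "happi";
    # mapping it back to "happy" needs a spelling-normalization step.
    return prefix, root, suffix
```

For example, `split_affixes("unhappiness")` returns `("un", "happi", "ness")`.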
Q7. Why is NLP hard?
Q8. What is the difference between NLP and NLU?
NLP is the umbrella field; NLU (understanding) and NLG (generation) are two of its components. NLU, expanded as Natural Language Understanding, is the component of NLP that works out the meaning of natural language input, whether spoken or written. It uses POS taggers and parsers to find the meaning of the language and to build applications; it can be defined as the process of reading and interpreting language. NLG, expanded as Natural Language Generation, is the component of NLP that produces natural language. It uses POS tags, parsing results, and other information to generate natural language with machines; it can be defined as the process of writing or generating language.
Q9. What is Stemming in NLP?
Stemming in NLP is the process of reducing a word to its word stem or root form. Stemming is crucial in NLP (natural language processing) as well as in NLU (natural language understanding): knowing the base form of a word makes it possible to find related results that would otherwise be missed, which is why the process is considered so important. Stemming is usually performed by an algorithm; once the root word is found, it can be used to retrieve new results related to that content.
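As a sketch of the idea, here is a toy suffix-stripping stemmer. The suffix list is invented for illustration; real systems use the Porter or Snowball algorithms (e.g. NLTK's `PorterStemmer`).

```python
# Toy suffix-stripping stemmer; not a real stemming algorithm.
SUFFIXES = ["ization", "ation", "ness", "ment", "ing", "ies", "ed", "es", "s"]

def crude_stem(word):
    for suffix in SUFFIXES:
        # Only strip when a reasonably long stem would remain.
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word
```

For example, `crude_stem("walking")` gives `"walk"`; like a real stemmer, the output need not be a dictionary word (`"studies"` becomes `"stud"`).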
Q10. What is FSA recognition?
A Finite State Automaton (FSA) is an abstract model of a computer that reads an input string one symbol at a time, moving between a finite set of states according to a transition function, starting from a designated start state.
After consuming the input, the FSA accepts the string if it ends in an accepting (final) state and rejects it otherwise. Every automaton defines a language: the set of strings it accepts.
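A minimal FSA can be written as a transition table. The language chosen here (binary strings containing an even number of 0s) is just an illustration; the accept/reject behaviour is exactly as described above.

```python
# Transition table for a two-state FSA over the alphabet {"0", "1"}.
TRANSITIONS = {
    ("even", "0"): "odd",
    ("even", "1"): "even",
    ("odd", "0"): "even",
    ("odd", "1"): "odd",
}
START, ACCEPTING = "even", {"even"}

def accepts(s):
    state = START
    for ch in s:
        key = (state, ch)
        if key not in TRANSITIONS:
            return False  # reject on a symbol outside the alphabet
        state = TRANSITIONS[key]
    return state in ACCEPTING  # accept iff we end in a final state
```

For example, `accepts("1001")` is true (two 0s) while `accepts("10")` is false (one 0).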
Q11. What are some open-source NLP libraries?
Q12. Why is NLP relevant?
Q13. What is pragmatic analysis in NLP?
Pragmatic analysis in NLP (Natural Language Processing) is the process of extracting the intended meaning of a text from the context in which it is written. The literal, structured meaning of a sentence often underspecifies what the writer actually wants to convey, so pragmatic analysis re-interprets the text against real-world knowledge and context. It takes a structured representation of a given text and works out what the actual meaning was, making it a crucial process for obtaining useful information and the exact message a text is meant to convey.
Q14. List some components of NLP?
There are five main components of NLP (Natural Language Processing).
Q15. List the major components of NLP?
Natural Language Processing has five main components. They are:
Entity extraction: Sentences are analyzed to identify entities such as people, places, organizations, and events. The identified entities are clustered by group, and an importance factor is assigned to each.
Syntactic analysis: The sentence is parsed to identify the relations between words and to understand its grammar.
Semantic analysis: After the sentence is parsed and entities are extracted, semantic analysis is performed to find the meaning of the sentence in a context-free form.
Sentiment analysis: The sentence is analyzed to find its mood or attitude, i.e. its polarity (positive or negative), along with a magnitude that weights that polarity.
Pragmatic analysis: The statement is analyzed in context, using the preceding and succeeding sentences.
Q16. List some real-world applications of NLP?
There are many real-world applications of NLP. Some of them are:
Spam filtering: Gmail filters spam mail from legitimate mail using NLP.
Digital assistants: Google Assistant, Amazon Alexa, and many other popular digital assistants are built with NLP.
Predictive text: Autocorrect on smartphone keyboards and autocomplete in Google Search both use NLP.
Machine translation: This is a major application area of NLP; translation software such as Google Translate and Microsoft Translator relies on it heavily.
Search engines: All search engines use NLP to analyze the user's query and return relevant results.
Q17. Explain NLP Terminology?
Some of the common NLP terms are:
Q18. What is Lemmatization in NLP?
Lemmatization is the process of converting a word into its base form. In NLP, lemmatization considers the context and converts the word into a meaningful base form, called a lemma. Getting the correct lemma of a word requires morphological analysis and a dictionary. There are various libraries for lemmatization, such as the WordNet lemmatizer in NLTK and the spaCy lemmatizer.
For example, 'caring' is lemmatized into 'care'.
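A dictionary-backed sketch shows why lemmatization needs a vocabulary. The entries below are a hypothetical miniature lexicon; real lemmatizers, such as NLTK's WordNet lemmatizer, combine full dictionaries with morphological rules.

```python
# Toy dictionary-based lemmatizer; LEMMA_DICT is an invented mini-lexicon.
LEMMA_DICT = {
    "caring": "care",
    "better": "good",
    "studies": "study",
    "mice": "mouse",
}

def lemmatize(word):
    # Fall back to the word itself when the lexicon has no entry.
    return LEMMA_DICT.get(word.lower(), word)
```

Note that irregular forms like "mice" -> "mouse" can only be handled through such a vocabulary, which is exactly what separates lemmatization from suffix-stripping stemmers.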
Q19. Explain Latent Semantic Indexing in NLP?
LSA (Latent Semantic Analysis), sometimes referred to as LSI (Latent Semantic Indexing), analyzes the relationships between documents and the words they contain by representing both as vectors; in vector form, relationships can be measured as distances. The first step in LSA is to build a term-document matrix, with rows for unique words and columns for documents, weighting each term using the term frequency-inverse document frequency (TF-IDF) scheme. LSA then applies SVD (Singular Value Decomposition) to this matrix to reduce the dimensionality of the vectors, and the relationships between documents are computed in the reduced "latent" space.
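The pipeline can be sketched with NumPy: build a small term-document matrix, factorize it with SVD, and compare documents in the reduced space. The counts below are invented, and raw counts stand in for TF-IDF weights to keep the sketch short.

```python
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents);
# docs 0-2 are pet-related, doc 3 is finance-related.
terms = ["cat", "dog", "pet", "stock", "market"]  # row labels for X
X = np.array([
    [2, 0, 1, 0],   # cat
    [0, 2, 1, 0],   # dog
    [1, 1, 2, 0],   # pet
    [0, 0, 0, 3],   # stock
    [0, 0, 1, 2],   # market
], dtype=float)

# SVD factorizes X into U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Keep the top-k singular values to get a low-rank "latent" space.
k = 2
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dim vector per document

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```

In the latent space, the pet-related documents end up closer to each other than to the finance-related one, even though they share no terms directly.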
Q20. What is text mining in NLP?
Text mining is the process of analyzing unstructured textual data to extract valuable information from it. It incorporates processes such as data extraction, machine learning, and statistics. First, the unstructured data is gathered; then it is converted into a structured form, often using machine learning algorithms; finally, useful information is extracted with statistical and text-mining algorithms. Text mining is used in areas such as social media analysis, customer service, and fraud detection.
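A first text-mining step — turning unstructured text into a structured frequency table — can be sketched as follows; the stopword list is a tiny illustrative subset of what real systems use.

```python
import re
from collections import Counter

# Illustrative stopword subset; real lists contain hundreds of entries.
STOPWORDS = {"the", "a", "is", "of", "to", "and", "it", "in"}

def top_keywords(text, n=3):
    # Lowercase and tokenize the raw text, drop stopwords,
    # then count occurrences to get a structured frequency table.
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return counts.most_common(n)
```

For example, `top_keywords("The service was great, great support and great staff")` ranks "great" first with a count of 3.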
Q21. What is word embedding?
Word embedding in NLP is a modeling technique that maps words and phrases to vectors of real numbers. It improves accuracy in tasks such as sentiment analysis and syntactic parsing. There are many algorithms for learning word vectors, such as GloVe, Word2Vec, and trainable embedding layers. A limitation of classic word embeddings is that they cannot represent a word with multiple meanings as different vectors; that is, they conflate homonyms into a single vector.
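A toy lookup table shows how embeddings are used once trained. The 3-dimensional vectors below are invented for illustration; real vectors from Word2Vec or GloVe have hundreds of dimensions.

```python
import numpy as np

# Invented miniature embedding table (illustration only).
EMBEDDINGS = {
    "king":  np.array([0.8, 0.6, 0.1]),
    "queen": np.array([0.7, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def similarity(w1, w2):
    # Cosine similarity between two word vectors.
    a, b = EMBEDDINGS[w1], EMBEDDINGS[w2]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

With these made-up vectors, "king" comes out more similar to "queen" than to "apple", which is the kind of relationship real embeddings capture from data.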
Q22. What is Latent Dirichlet Allocation in NLP?
LDA (Latent Dirichlet Allocation) is a statistical technique that represents each document as a mixture of topics, where each topic is a probability distribution over words. The algorithm starts by randomly assigning one of a predefined number of topics to each word in each document, then iteratively updates these assignments so that words that co-occur settle into coherent topics with high probability. It is widely used in topic modelling to infer, with a certain probability, which topics a document or sentence belongs to.
Q23. Explain the difference between lemmatization and stemming?
Differences between lemmatization and stemming:
Stemming reduces a word by cutting off its beginning or end. It achieves the goal most of the time, but not always, because it does not use a vocabulary or morphological analysis.
E.g., Studies -> Studi (the word is not reduced to its correct base form)
Studying -> Study (the word is reduced to its correct base form)
Lemmatization is similar to stemming, but it takes morphological analysis and vocabulary into consideration, so it always converts the word into a correct base form.
E.g., Studies -> Study and Studying -> Study
Q24. What are distance-based classifiers?
Distance-based classification assigns objects to classes based on the similarity or dissimilarity between them, measured by a distance function. In NLP, the k-nearest neighbours (k-NN) algorithm is commonly used for text classification. It is a simple supervised algorithm that classifies a data object according to the classes of the objects nearest to it, typically using Euclidean distance.
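A minimal sketch of distance-based text classification: 1-nearest-neighbour over bag-of-words vectors with Euclidean distance. The training texts and labels below are made up for illustration.

```python
import math
from collections import Counter

# Invented labelled training texts (illustration only).
TRAIN = [
    ("great movie loved it", "positive"),
    ("wonderful acting great film", "positive"),
    ("terrible plot boring film", "negative"),
    ("awful boring waste", "negative"),
]

def euclidean(a, b):
    # Euclidean distance between two bag-of-words Counters.
    words = set(a) | set(b)
    return math.sqrt(sum((a[w] - b[w]) ** 2 for w in words))

def classify(text):
    # Assign the label of the single closest training document (k = 1).
    query = Counter(text.split())
    _, label = min(
        (euclidean(query, Counter(doc.split())), lbl) for doc, lbl in TRAIN
    )
    return label
```

For example, `classify("terrible boring film")` lands nearest the negative examples, while `classify("great film")` lands nearest the positive ones.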
Q25. What is TF-IDF?
TF-IDF (Term Frequency-Inverse Document Frequency) is a statistical method for measuring how important a word is to a document in a collection. It assigns a weight to a word based on how often the word occurs in the document, offset by how many documents in the collection contain the word. Concretely, it multiplies the term frequency (how frequent the word is in a document) by the inverse document frequency (how rare or common the word is across the set of documents). TF-IDF is used for valuing text in automated text analysis and for scoring text in machine learning algorithms for NLP.
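The computation can be sketched directly. This uses the plain log(N/df) formulation; libraries often add smoothing terms, so exact values differ between implementations.

```python
import math

# Toy tokenized corpus (three tiny documents).
docs = [
    ["the", "cat", "sat"],
    ["the", "dog", "barked"],
    ["the", "cat", "purred"],
]

def tf_idf(term, doc, corpus):
    tf = doc.count(term) / len(doc)              # term frequency in this doc
    df = sum(1 for d in corpus if term in d)     # documents containing term
    idf = math.log(len(corpus) / df)             # inverse document frequency
    return tf * idf
```

Because "the" appears in every document, its idf (and hence its TF-IDF) is 0, while "dog", appearing in only one document, gets a higher weight than "cat" — exactly the offsetting described above.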
Q26. Describe dependency parsing in NLP?
Dependency parsing, a form of syntactic parsing, assigns a syntactic structure to a sentence in the form of a parse tree. It is useful for grammar checking and for the semantic analysis of sentences in NLP. Because of ambiguity, a sentence can have multiple parse trees, which makes dependency parsing a complex task.
There are many libraries that provide dependency parsing, such as the spaCy dependency parser and the NLTK dependency parser.
Q27. How to build ontologies?
There are several tools for building an ontology in Python; for example, the Owlready2 library offers good support for creating and manipulating OWL ontologies. FRED is a machine-reader tool for the Semantic Web that can build and design an ontology from natural-language text.
Q28. List some tools for training NLP models?
Some of the popular tools for training the NLP models are,
Q29. What is a POS tagger?
A POS (Part-of-Speech) tagger is software that categorizes the words in a text according to their part of speech, based on each word's definition and its context. The tagger reads a word and assigns a part of speech such as noun, verb, or adjective. POS tagging, also termed grammatical tagging or word-category disambiguation, is done as a step in text analysis to uncover the grammatical structure of the text.
The algorithms used by POS taggers fall into two types: rule-based tagging and stochastic tagging.
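A rule-based tagger can be sketched in a few lines. The lexicon and suffix rules here are a hypothetical miniature; real taggers such as `nltk.pos_tag` are trained statistically on labelled corpora.

```python
# Invented mini-lexicon and suffix rules (illustration only).
LEXICON = {"the": "DET", "a": "DET", "dog": "NOUN", "cat": "NOUN",
           "runs": "VERB", "quickly": "ADV"}

def tag(tokens):
    tagged = []
    for tok in tokens:
        word = tok.lower()
        if word in LEXICON:
            tagged.append((tok, LEXICON[word]))   # lexicon lookup
        elif word.endswith("ly"):
            tagged.append((tok, "ADV"))           # suffix rule
        elif word.endswith("ing"):
            tagged.append((tok, "VERB"))          # suffix rule
        else:
            tagged.append((tok, "NOUN"))          # default fallback
    return tagged
```

For example, `tag(["the", "dog", "runs"])` yields `[("the", "DET"), ("dog", "NOUN"), ("runs", "VERB")]`; unknown words fall through the suffix rules to a default tag, which is where real taggers use statistics instead.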
Q30. What is shallow parsing?
Shallow parsing analyzes a sentence to identify its parts of speech, such as nouns, verbs, and adjectives.
After finding the parts of speech, it links them together into groups to find the grammatical structure of the text. It is similar to POS tagging, but takes POS tagging one step further to find verb groups, noun groups, and so on. Shallow parsing (also called chunking) is used heavily in Natural Language Processing.
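Taking POS tags one step further into groups can be sketched as a toy noun-group chunker; the tag names and the grouping rule (maximal runs of determiners, adjectives, and nouns that contain a noun) are simplified assumptions.

```python
# Toy noun-group chunker over (word, tag) pairs (illustration only).
def noun_chunks(tagged):
    chunks, current = [], []
    for word, tag in tagged:
        if tag in ("DET", "ADJ", "NOUN"):
            current.append((word, tag))   # extend the current group
        else:
            # Close the group, keeping it only if it contains a noun.
            if any(t == "NOUN" for _, t in current):
                chunks.append(" ".join(w for w, _ in current))
            current = []
    if any(t == "NOUN" for _, t in current):
        chunks.append(" ".join(w for w, _ in current))
    return chunks
```

For example, tagging "the big dog chased a cat" and chunking it yields the noun groups "the big dog" and "a cat".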
Q31. What is NLTK?
NLTK, expanded as Natural Language Toolkit, is an open-source Python library for Natural Language Processing. Released in 2001 by Steven Bird and Edward Loper, NLTK is the most popular NLP package; it supports a wide variety of algorithms and statistical methods for text analysis and ships with sample data to work with. NLTK is mainly used in research and teaching.
NLTK has support for text classification, stemming, tagging, and parsing.
Q32. What is BERT?
BERT (Bidirectional Encoder Representations from Transformers) is an open-source NLP model developed by researchers at Google. It uses bidirectional training to learn about text. After being trained on billions of sentences, BERT has a good understanding of how sentences work.
BERT makes use of the Transformer (an attention mechanism) to learn contextual relations between words in a text. BERT took Natural Language Processing to the next level and created a big stir in the machine learning community.