Semantic Role Labeling using Contextualized Word Embeddings and Attention Layers

Semantic Role Labeling (SRL) is a task that consists of assigning to each word in an input sentence a label that indicates its semantic role (such as agent, goal, or result). My solution is a deep learning model that combines contextualized word embeddings with a bi-affine attention mechanism.

When we read a sentence, we are able to identify the subject, the object, and the other arguments mainly by looking at the predicate. In the example "The cat eats the mouse", "the cat" is the agent while "the mouse" is the patient. Changing the verbal form of the predicate is enough to show that semantic roles do not follow syntactic positions: in "The mouse is eaten by the cat", the meaning is the same and so are the semantic roles, even though the grammatical subject and object are swapped. Predicates therefore play one of the most important roles in the SRL task. We can picture the prediction process as a pipeline of four steps:

1. Predicate identification: finding the words that act as predicates
2. Predicate disambiguation: assigning each predicate its sense (e.g. EAT_BITE)
3. Argument identification: finding the words that are arguments of each predicate
4. Argument classification: labeling each identified argument with its semantic role

In this work I will only focus on the last two steps: argument identification and argument classification.
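To give an idea of what the bi-affine attention mechanism mentioned above can look like, here is a minimal PyTorch sketch of a bi-affine scorer over predicate and candidate-argument representations. The class name, dimensions, and projection layers are my own illustrative assumptions, not the exact architecture of this repository:

```python
import torch
import torch.nn as nn

class BiaffineScorer(nn.Module):
    """Scores every (predicate, token) pair against every role label.

    Illustrative sketch only: hidden sizes and the label set are
    assumptions, not the configuration actually used in this repository.
    """

    def __init__(self, hidden_dim: int, num_roles: int):
        super().__init__()
        # Separate projections for the predicate and the candidate argument.
        self.pred_mlp = nn.Linear(hidden_dim, hidden_dim)
        self.arg_mlp = nn.Linear(hidden_dim, hidden_dim)
        # Bi-affine tensor: one (hidden+1) x (hidden+1) matrix per role label
        # (the +1 adds a bias term on each side of the product).
        self.U = nn.Parameter(torch.zeros(num_roles, hidden_dim + 1, hidden_dim + 1))
        nn.init.xavier_uniform_(self.U)

    def forward(self, pred_repr: torch.Tensor, word_reprs: torch.Tensor) -> torch.Tensor:
        # pred_repr:  (batch, hidden)          embedding of the predicate token
        # word_reprs: (batch, seq_len, hidden) embeddings of all tokens
        p = torch.relu(self.pred_mlp(pred_repr))
        a = torch.relu(self.arg_mlp(word_reprs))
        # Append a constant 1 to each representation for the bias terms.
        p = torch.cat([p, torch.ones(p.size(0), 1, device=p.device)], dim=-1)
        a = torch.cat([a, torch.ones(a.size(0), a.size(1), 1, device=a.device)], dim=-1)
        # scores[b, s, r] = p[b] @ U[r] @ a[b, s]
        scores = torch.einsum("bi,rij,bsj->bsr", p, self.U, a)
        return scores  # (batch, seq_len, num_roles)
```

Taking the argmax over the last dimension yields one role per token for the given predicate; a null label such as the dataset's "_" covers tokens that are not arguments.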

Dataset

The dataset I used to train my model is UniteD-SRL (Tripodi et al., EMNLP 2021). This dataset is private and was provided by the Sapienza NLP Group. It is a JSON file where each entry (i.e. each sample) is a dictionary containing the following fields:

```json
sentence_id: {
    "words": ["The", "cat", "ate", "the", "mouse", "."],
    "lemmas": ["the", "cat", "eat", "the", "mouse", "."],
    "pos_tags": ["DET", ..., "PUNCT"],
    "dependency_relations": ["NMOD", ..., "ROOT", ..., "P"],
    "dependency_heads": [1, 2, 0, ...],
    "predicates": ["_", "_", "EAT_BITE", "_", "_", "_"],
    "roles": {
        "2": ["_", "Agent", "_", "_", "Patient", "_"]
    }
}
```
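Since the roles dictionary is keyed by predicate position, a natural preprocessing step is to flatten the file into one sample per annotated predicate. The following is a minimal sketch under the layout above; the function name and file path are hypothetical, not taken from the repository:

```python
import json

def load_united_srl(path: str):
    """Load UniteD-SRL-style samples from a JSON file keyed by sentence id."""
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)

    samples = []
    for sentence_id, entry in data.items():
        # One training sample per annotated predicate: "roles" maps the
        # predicate's token index (as a string) to the role of every token.
        for pred_idx, roles in entry["roles"].items():
            samples.append({
                "sentence_id": sentence_id,
                "words": entry["words"],
                "predicate_index": int(pred_idx),
                "predicate_sense": entry["predicates"][int(pred_idx)],
                "roles": roles,
            })
    return samples

# Hypothetical usage:
# train_samples = load_united_srl("data/train.json")
```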

Training

Training can be done simply by running the provided Google Colab notebook train-notebook.ipynb (possibly using a GPU runtime). To train the model, simply follow these steps:

  1. First of all, clone this repository:
```bash
!git clone https://github.com/EdoGiordani99/Semantic-Role-Labeling.git
```
  2. Set up the config.py file correctly (a sketch of such a file follows these steps). In this file you should set:
    • MODEL_NAME: the name under which the trained model will be saved
    • LANGUAGE_MODEL_NAME: the name of the pretrained BERT model used to compute the contextualized word embeddings
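For reference, a config.py along these lines would satisfy the two settings above; the values are placeholders chosen for illustration, not the repository's actual defaults:

```python
# config.py -- illustrative placeholder values, not the repository's defaults
MODEL_NAME = "srl_biaffine_model"         # name used when saving the trained model
LANGUAGE_MODEL_NAME = "bert-base-cased"   # pretrained BERT model for the embeddings
```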
