Tiago and I have Zoom calls on July 12th and July 14th. We decide to use Red Hen's built-in search engine. Since Edge 2 allows querying by metadata, we decide to search by word/phrase for lexical prompts that denote the ordering of elements, such as "first", "second", and so on. The general idea is to use lexical prompts to find the co-occurring gestures, and then use those gestures to find further lexical prompts, so as to understand the co-occurrence relations. However, we find that the videos and their transcriptions (the lexical context) are poorly aligned. Hence, we look for alternative ways to prepare the dataset.
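The lexical-prompt search can be sketched as a simple keyword scan over transcript lines. This is only an illustration of the idea: the prompt list and the transcript representation here are assumptions, and the actual queries were run through Edge 2's metadata search rather than local code.

```python
import re

# Assumed list of ordinal lexical prompts (illustrative, not the exact
# set used in the Edge 2 queries).
ORDINAL_PROMPTS = ["first", "second", "third", "next", "finally"]

def find_prompts(transcript_lines):
    """Return (line_index, prompt) pairs for every ordinal prompt found."""
    pattern = re.compile(
        r"\b(" + "|".join(ORDINAL_PROMPTS) + r")\b", re.IGNORECASE
    )
    hits = []
    for i, line in enumerate(transcript_lines):
        for match in pattern.finditer(line):
            hits.append((i, match.group(1).lower()))
    return hits

# Toy transcript: each entry stands in for one caption line.
lines = [
    "First, we look at the chart.",
    "The weather today is sunny.",
    "Second, we compare the results.",
]
print(find_prompts(lines))  # → [(0, 'first'), (2, 'second')]
```

In practice each hit would then be mapped back to its video timestamp to extract the surrounding clip, which is exactly where the misalignment between video and transcript becomes a problem.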