Experimental metadata (EM) reporting is fundamental for reproducing and understanding biomedical experiments and their results. Experimental Metadata Reporting Checklist Questions (EMR-CLQs) have been designed to capture EM and evaluate the quality of its reporting. Automatically answering EMR-CLQs is necessary to check the completeness and clarity of EM, which could support the peer-review process. Extracting EMR-CLQ answers automatically can also make searching the relevant literature for metadata analysis more efficient. However, automatically answering these questions is challenging: identifying one species as the answer among many mentioned species, for example, requires an automatic understanding of the context in which each species is mentioned. This thesis explores the feasibility of answering different types of EMR-CLQs automatically by understanding the structure of both the EMR-CLQs and the biomedical article. A rule-based text mining workflow was proposed to automate the answering process. Five EMR-CLQs, divided into two types (Main and Attribute), were answered automatically from 58 parasitology articles. The feasibility of the proposed workflow was evaluated against gold-standard annotations produced by domain experts. The results show that EMR-CLQs can be answered automatically with a mean precision of 70% and a mean recall of 80%.
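The evaluation metrics mentioned above can be illustrated with a minimal sketch of how precision and recall might be computed for one question by comparing extracted answers against gold-standard annotations. The function name and the example species mentions are hypothetical, not taken from the thesis data.

```python
# Hypothetical illustration (not the thesis's actual evaluation code):
# compare automatically extracted EMR-CLQ answers against gold-standard
# annotations for a single question and compute precision and recall.

def precision_recall(predicted: set, gold: set):
    """Return (precision, recall) for one question's answer set."""
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

# Invented example: species extracted as candidate answers for one article.
predicted = {"Plasmodium falciparum", "Anopheles gambiae"}
gold = {"Plasmodium falciparum"}

p, r = precision_recall(predicted, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=1.00
```

Averaging these per-question scores over all questions and articles would yield mean precision and recall figures of the kind reported above.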