McGill Summer Research Bursary Program | Summer 2021
Faculty of Medicine and Health Sciences
Published online: Summer 2021
Error augmentation for upper limb rehabilitation in stroke patients
2Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada.
3Integrated Program in Neuroscience, McGill University, Montreal, QC, Canada.
4School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada.
5Jewish Rehabilitation Hospital Research Site of the Center for Interdisciplinary Research in Rehabilitation, Laval, QC, Canada.
Corresponding Author: Alexander Kevorkov, email firstname.lastname@example.org
Background information: After stroke, patients are often left with impaired upper limb mobility. To recover motor function, motor learning principles are applied during stroke rehabilitation programs. Motor learning is a set of processes, including practice, adaptation and experience, that lead to permanent changes in motor capability. In recent years, virtual reality rehabilitation programs have gained traction in the rehabilitation field. They rely on implicit motor learning, which happens without the patients being explicitly aware that they are improving in a motor task. Error augmentation is a tool used in virtual reality (a computer-generated interactive environment) to enhance motor learning through amplified feedback. Error augmentation induces an error in a given movement, which then necessitates correction. The correction and adaptation process increases neuronal connections in the brain and strengthens motor pathways to promote recovery of motor function. Purpose of the study: The purpose of the study is to evaluate the feasibility of using a virtual reality error augmentation program to help stroke patients recover their range of motion. The study tested and evaluated the program to determine whether it is adequate for stroke patients. Methods: Participants were required to complete a reaching task with their non-dominant hand. Fifteen markers were positioned on each participant's arm to track their movement using the Optotrak motion analysis system. Participants completed control reaches with a physical target located 30 centimeters from their mid-sternum, as well as reaches with the target positioned contralateral to their arm. In the first condition, the participants performed 15 free reaches and 15 assisted reaches using an ergonomic double-joint horizontal manipulandum, followed by 15 free reaches in the second condition. They then underwent a reaching training program in a virtual environment with error augmentation for 3 days. 
Each training session lasted 30 minutes, yielding approximately 150 reaches. After training, fatigue was assessed using the Borg CR10 scale. A retention test with physical targets was given on the last day, along with experiment feedback questionnaires: the NASA Task Load Index, the User Engagement Scale Long Form, and the Intrinsic Motivation Inventory. Results: The data collected with the Optotrak motion system accurately reflected the motion of the arm. We were able to gather all the necessary information to eventually perform a motion analysis. The participants' answers about the program across the feedback questionnaires were consistent and positive. The fatigue level never exceeded 1 on the Borg CR10 scale, enjoyment was rated high on the User Engagement Scale Long Form and Intrinsic Motivation Inventory, while strain, pressure and irritability were rated low on all scales. Conclusion: The program met its objective of being suitable for stroke patients and of enabling eventual tracking and analysis of this population's movement. The task appears adequate for use with stroke patients: it is not tiring, is easy to achieve, and is motivating. Further tests must be done to adequately assess the potential of this virtual reality program, but these first results are an encouraging step toward its use as a rehabilitation program in the near future.
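The core error-augmentation mechanism described above (a deviation from an ideal trajectory is amplified before being displayed, prompting correction and adaptation) can be sketched as follows. Note that the gain value and the straight-line ideal path here are illustrative assumptions, not parameters reported in this abstract:

```python
import numpy as np

def augmented_cursor(hand_xy, ideal_xy, gain=1.5):
    """Return the displayed cursor position under error augmentation.

    The deviation of the hand from the ideal trajectory point is multiplied
    by `gain` > 1, so small errors appear larger and invite correction.
    Both the gain of 1.5 and the notion of a straight-line ideal path are
    hypothetical choices for illustration.
    """
    hand = np.asarray(hand_xy, dtype=float)
    ideal = np.asarray(ideal_xy, dtype=float)
    return ideal + gain * (hand - ideal)

# A hand 1 cm to the right of the ideal path is displayed 1.5 cm to the right.
print(augmented_cursor([1.0, 10.0], [0.0, 10.0]))  # -> [ 1.5 10. ]
```

With a gain of 1 the cursor is veridical; gains above 1 exaggerate the error that the participant must correct, which is the feedback-enhancement idea the abstract relies on.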
Anterograde transport of brain-derived neurotrophic factor from the thalamic parafascicular nucleus to the striatum in mice
Corresponding Author: Angela Yang, email email@example.com
Background information: Brain-derived neurotrophic factor (BDNF) is a protein that supports the survival and growth of neurons. Reduced levels of BDNF in the striatum, where cognition and locomotion are regulated, have been observed in various neurodegenerative pathologies. Interestingly, striatal neurons do not express BDNF mRNA despite the presence of BDNF protein, suggesting that production and transport must occur elsewhere in the brain. Contrary to the traditional view of neurotrophins, where these proteins are target-derived and retrogradely delivered, recent studies have demonstrated that neurotrophins, including BDNF, can undergo anterograde trafficking from cortical, subcortical, and nigral afferents to the striatum. The thalamic parafascicular nucleus (PF) is of particular interest in BDNF trafficking due to its dense innervation of the striatum and its implication in Huntington’s disease. Objective: To explicitly delineate the anterograde transport of BDNF from the thalamic parafascicular nucleus (PF) to the striatum using biotinylated BDNF in mice. Methods: Biotinylated BDNF protein was intracranially injected into the PF of 5-6-week-old C57BL/6J wildtype mice (n=10). An exogenous source of BDNF was exploited to distinguish PF-derived BDNF in the striatum from other endogenous origins. Mice were sacrificed by transcardial perfusion at two different time points: 2 days post-injection (n=5) and 2 weeks post-injection (n=5). Brains were removed and coronally sectioned at 40 microns, followed by avidin-biotin-peroxidase complex staining to visualize the presence of biotinylated BDNF. Brain sections were mounted onto slides and examined microscopically. Results: In brains extracted 2 days post-injection, biotinylated BDNF was clearly visualized at the PF injection site. 
Furthermore, biotinylated BDNF was also observed in the PF fiber tracts travelling both anteriorly and posteriorly, serving as a proof of concept for our surgical and staining protocols as well as for the use of biotinylated BDNF to study neurotrophin transport. However, the protein was not found in the striatum, where we had expected to find anterogradely transported BDNF. In the PF of the 2-week post-injection group, biotinylated BDNF staining was much weaker than that of the 2-day post-injection group and was undetectable in the PF fiber tracts and the striatum. Conclusion: Taken together, our results demonstrate the effectiveness of biotinylated BDNF in studying neurotrophin transport in vivo. Optimization of the experimental endpoint may better reveal BDNF transport from the PF to the striatum and contribute to our understanding of dysfunctional neurotrophic signaling in neurodegenerative diseases.
Lipoprotein(a) in Atherosclerotic Cardiovascular Disease – Effect Modification by Low-Density Lipoprotein Cholesterol and Apolipoprotein B
Corresponding Author: Jenny Wang, email firstname.lastname@example.org
Background information: Lipoprotein(a) [Lp(a)], a low-density lipoprotein-like particle with an additional apolipoprotein(a), is a risk factor for cardiovascular diseases. Plasma levels of Lp(a) in an individual are genetically determined, in large part by variation in the LPA locus. An association between high plasma levels of Lp(a) and elevated risk of atherosclerotic cardiovascular disease (ASCVD)-related events, such as myocardial infarction and coronary artery disease, is supported by observational and genetic evidence. Screening for high lipoprotein(a) in individuals with moderate elevations of other risk factors, such as low-density lipoprotein cholesterol (LDL-C) and apolipoprotein B (apoB), may help identify individuals at high risk of ASCVD and could improve clinical care. Purpose of the study: This study aimed to i) confirm that Lp(a) is a reliable predictor of ASCVD and determine whether its effect is modified by LDL-C and apoB; and ii) identify the sub-groups of individuals with the highest levels of Lp(a) to evaluate the importance of the Lp(a) measurement as a predictor of ASCVD risk in patients with moderate elevations of LDL-C and/or apoB. Methods: The United Kingdom (UK) Biobank is a large-scale population database containing genetic and health information on over 500,000 participants aged 37 to 73 years, recruited between 2006 and 2010. For this study, we excluded participants on cholesterol medication at baseline. All analyses were performed on a subset of 250,189 White British participants aged 39 to 73 years with complete data for ASCVD risk factors and other covariates of interest (age, sex, BMI, smoking status, diabetes, systolic blood pressure, diabetes and blood pressure medication status, LDL-C levels, Lp(a) levels, apoB levels). Results: The mean (SD) age at enrollment was 56 (8) years and 42% were men. Over the mean (SD) follow-up time of 10.5 (2) years, 17,736 participants (7%) had an ASCVD-related event. 
The means (SDs) of LDL-C, apoB and Lp(a) were 144.34 (31.03) mg/dL, 1.07 (0.23) g/L and 48.43 (58.81) nmol/L, respectively. Higher concentrations of Lp(a) conferred higher relative and absolute ASCVD risks, especially in the presence of moderate to high levels of LDL-C or apoB, with a 31% increase in event rates at 15 years. Regardless of LDL-C or apoB level, absolute ASCVD risk in individuals with Lp(a) ≥ 100 nmol/L was increased compared with individuals with Lp(a) < 100 nmol/L. However, in the presence of high Lp(a) levels, event rates were significantly higher in individuals with high LDL-C (9.21%) or apoB (9.41%). Conclusion: Lp(a) concentrations were associated with incident ASCVD, and this association was modified by LDL-C and apoB. A high absolute risk of incident ASCVD was conferred by the presence of both high Lp(a) and high LDL-C or apoB, making this the sub-group most at risk. Our study emphasizes the importance of considering Lp(a) measurements in ASCVD risk stratification, especially in those with moderate or higher levels of LDL-C and apoB.
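The stratified comparison of absolute event rates described above can be sketched as follows. The Lp(a) cutoff (100 nmol/L) and the marginal distributions (LDL-C mean 144, SD 31 mg/dL; Lp(a) mean near 48 nmol/L) follow the abstract, but the cohort itself is synthetic, the LDL-C "high" threshold of 160 mg/dL is a hypothetical choice, and the event-probability model is invented for illustration; this is not UK Biobank data or the study's actual analysis:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

# Synthetic cohort: right-skewed Lp(a) in nmol/L, roughly normal LDL-C in mg/dL.
df = pd.DataFrame({
    "lpa": rng.gamma(shape=0.8, scale=60, size=n),
    "ldl": rng.normal(loc=144, scale=31, size=n),
})

# Illustrative event model: baseline risk plus additive bumps for each high marker.
p_event = 0.04 + 0.03 * (df["lpa"] >= 100) + 0.02 * (df["ldl"] >= 160)
df["event"] = rng.random(n) < p_event

# Cross-stratify as in the analysis: Lp(a) >= 100 nmol/L against high LDL-C,
# then compare the absolute event rate in each of the four sub-groups.
df["high_lpa"] = df["lpa"] >= 100
df["high_ldl"] = df["ldl"] >= 160
rates = df.groupby(["high_lpa", "high_ldl"])["event"].mean()
print(rates)
```

The group with both markers elevated shows the highest absolute event rate, mirroring the abstract's finding that high Lp(a) combined with high LDL-C or apoB identifies the sub-group most at risk.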
Can machine learning improve cohorting and isolation of COVID patients in the ED?
Corresponding Author: Karim Atassi, email email@example.com
As of August 12th, 2021, Quebec had a total of 380,407 COVID-19 cases with 11,242 recorded deaths. Due to concerns that the available hospital resources were not sufficient to keep up with the number of patients requiring urgent care, emergency departments (ED) had to adapt their infrastructure to minimize the misallocation of resources and reduce any risk of transmission between patients and healthcare professionals. A method that the Jewish General Hospital (JGH) adopted was to test patients presenting to the ED with a real-time reverse transcription polymerase chain reaction (RT-qPCR) molecular assay that detects SARS-CoV-2 RNA, usually via nasopharyngeal swabs. However, these tests take 5-6 hours to yield results and so are not very effective at decongesting the ED. Therefore, healthcare providers resorted to using the World Health Organization (WHO) criteria to designate patients as moderate- to high-risk “hot” or low-risk “cold” for COVID-19 infection. These criteria led to many patients being designated as “hot” who ended up being COVID-19 negative. These patients utilize extensive ED resources in the form of precautions and isolation procedures. A screening tool accounting for all data available at triage, rather than just the few questions in the WHO criteria, might more accurately risk-stratify patients at the time the “hot”/“cold” designation is made. This work aims to build a machine learning model that uses features collected at ED triage to provide healthcare staff with a predictive score specifying the risk that patients presenting to the ED have COVID-19. We aim to create a model that could be easily applied in the ED at the point of triage to reduce the number of patients unnecessarily designated as “hot” as compared to using the WHO criteria. We performed a retrospective analysis of 4112 patients who were at least 17 years of age and had received a COVID-19 test within 6 days after their visit to the JGH. 
Data were collected from Jan 28th – June 11th, 2020. Our data did not include patients who had already tested positive for COVID-19 within the 10 days leading up to their visit. Our outcome of interest was COVID-19 infection as indicated by the RT-qPCR assay (nasal swabs). The exposure variables used were those available at triage: presence of flu symptoms, age, destination after triage, hot/cold decision, triage score, recent travel, contact with individuals positive for COVID-19, systolic pressure, diastolic pressure, pulse, temperature, O2 saturation, supplemental O2, respiratory rate, capillary blood glucose monitoring, Glasgow coma scale and the PRISMA-7 score. We performed receiver-operating characteristic analysis to determine the discriminatory performance of the model in predicting COVID-19 infection, compared to a reference model using only the outcome of the WHO screening tool as a predictor. We also generated a calibration curve. Our results showed that our model had a higher AUC (0.8007, 95% CI: 0.7564–0.8453) than the reference model based on the staff decision (0.5257, 95% CI: 0.4933–0.5587). Feature importance analysis indicated that the outcome of the first COVID-19 test had the highest contribution in predicting whether or not the patient was infected (11.1%), followed by the hot/cold decision by the staff (9.79%). The model was not well calibrated. In conclusion, we have developed a COVID-19 predictor that outperforms the WHO screening tool in identifying patients at risk of COVID-19. However, the model is poorly calibrated and requires further development and validation before its potential use can be assessed.
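The comparison described above (a model trained on triage features versus a reference model that sees only the hot/cold designation, evaluated by ROC AUC on held-out data) can be sketched as follows. Everything here is synthetic and illustrative: the features are a small stand-in subset (hot/cold flag, temperature, O2 saturation, pulse), the outcome-generating model is invented, and logistic regression is used as a placeholder classifier rather than the study's actual model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 4000  # cohort size matching the order of magnitude in the abstract

# Synthetic triage features: hot/cold flag plus a few vitals.
X = np.column_stack([
    rng.integers(0, 2, n),        # hot/cold designation (1 = "hot")
    rng.normal(37.0, 0.6, n),     # temperature (°C)
    rng.normal(96.0, 3.0, n),     # O2 saturation (%)
    rng.normal(80.0, 15.0, n),    # pulse (bpm)
])

# Invented infection model: risk rises with "hot" status and fever,
# falls with O2 saturation; pulse carries no signal here.
logit = -3.0 + 1.0 * X[:, 0] + 0.8 * (X[:, 1] - 37.0) - 0.1 * (X[:, 2] - 96.0)
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Reference model: hot/cold designation alone, as in the abstract's comparison.
ref = LogisticRegression().fit(X_tr[:, :1], y_tr)
# Full model: all triage features.
full = LogisticRegression().fit(X_tr, y_tr)

auc_ref = roc_auc_score(y_te, ref.predict_proba(X_te[:, :1])[:, 1])
auc_full = roc_auc_score(y_te, full.predict_proba(X_te)[:, 1])
print(f"reference AUC: {auc_ref:.3f}, full-model AUC: {auc_full:.3f}")
```

Because the full model can exploit vitals that the hot/cold flag ignores, its held-out AUC exceeds the reference model's, which is the shape of the result the abstract reports (0.80 versus 0.53). A calibration curve (e.g. `sklearn.calibration.calibration_curve`) would be the analogous check for the abstract's calibration finding.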
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.