[D] ML Terminology in Clinical Prediction as well as the consequence of certain predictors being weighted the most important

I was recently reading this paper:

Predicting suicide attempts in adolescents with longitudinal clinical data and machine learning

The first thing I wanted to discuss is their use of the term “control.” They had three “control groups” (OSI, Depressed, General) plus a group of cases. Regarding model construction and validation, they say:

A total of 1,470 adolescents with ICD codes for suicide and self-inflicted injury (i.e. E950–E959) were identified for model development and validation.

So, to my understanding, they built the model using cases and OSI controls; then, when testing discrimination, did they mix the cases with the control group of interest to see how well the model can identify them? Or, given the general aim of the paper, are they trying to detect suicide attempts within the controls? I don’t think that’s the case, because they don’t state how many true cases are within each group. I’m used to controls in the context of measuring a treatment effect, and less so in prediction models (I’m more used to training vs. testing datasets, or people not even doing that).
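To make my reading concrete, here is a minimal sketch of how I interpret their discrimination setup — entirely my assumption with synthetic stand-in data, not the paper's actual pipeline: fit a classifier on cases vs. OSI controls, then report AUROC by mixing the cases with each control group in turn.

```python
# Sketch of my reading of the setup (assumed, not the paper's code):
# develop on cases vs. OSI, then measure discrimination (AUROC)
# against each control group separately.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: two features per adolescent (e.g. BMI, age).
cases = rng.normal(loc=1.0, size=(200, 2))       # suicide-attempt cases
osi = rng.normal(loc=0.0, size=(200, 2))         # OSI controls
depressed = rng.normal(loc=0.3, size=(200, 2))   # depressed controls
general = rng.normal(loc=-0.5, size=(200, 2))    # general controls

# Develop the model on cases vs. OSI controls.
X_dev = np.vstack([cases, osi])
y_dev = np.r_[np.ones(len(cases)), np.zeros(len(osi))]
clf = LogisticRegression().fit(X_dev, y_dev)

def auc_vs(controls):
    """Mix cases with one control group; ask how well the model separates them."""
    X = np.vstack([cases, controls])
    y = np.r_[np.ones(len(cases)), np.zeros(len(controls))]
    return roc_auc_score(y, clf.predict_proba(X)[:, 1])

for name, grp in [("OSI", osi), ("depressed", depressed), ("general", general)]:
    print(f"AUROC, cases vs. {name} controls: {auc_vs(grp):.2f}")
```

If this is what they did, each "comparison" is just the same model scored against a different negative class, which would explain why no per-group count of true cases is reported.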

Another confusing issue was the fact that they used their OSI controls as part of both model development and validation. Generally speaking, shouldn’t those be separate (i.e., you have distinct training and testing sets)? I was also confused by Figures 4A–B, which show a “depressed control comparison” (4A) vs. a “general control comparison” (4B); I thought the only comparison being made was cases vs. controls.
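For reference, this is what I'd normally expect as standard practice (again my assumption about convention, not a claim about the paper): carve off a held-out test split before any development, so the data used for validation never touches training.

```python
# Standard train/test separation (what I'd expect by default):
# split once, up front, then develop only on the training split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))                      # hypothetical features
y = (X[:, 0] + rng.normal(scale=0.5, size=400) > 0).astype(int)

# Cases + controls pooled, split once, stratified on the label.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)         # development only
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")  # validation only
```

Reusing the same OSI subjects on both sides of that line is what reads as leakage to me, unless they did an internal split they didn't spell out.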

Finally, I’m not sure how much of a concern this is, but I found it interesting (see charts) that BMI and age were generally among the best predictors. While the algorithms don’t care and will simply choose whatever best discriminates, I find it concerning that these kinds of superficial predictors, which (more so BMI than age) have little to do with suicide attempts, are considered the most powerful. I’m not sure whether this implies anything about the generalizability of the model to populations where one wouldn’t find such systematic differences in BMI and age.
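This is the kind of sanity check I'd run on my own data (hypothetical features and outcome, nothing from the paper): fit a tree ensemble and see whether superficial features like BMI and age crowd out the clinical signal in the importance ranking.

```python
# Hypothetical importance check: does BMI/age dominate a genuine
# clinical signal in a random forest's feature_importances_?
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 500
bmi = rng.normal(22, 4, n)
age = rng.normal(15, 1.5, n)
clinical = rng.normal(0, 1, n)                     # e.g. a symptom score

# Assumed generative story: outcome driven mostly by the clinical
# score, with a weaker BMI contribution and no direct age effect.
y = (0.8 * clinical + 0.4 * (bmi - 22) / 4
     + rng.normal(0, 1, n) > 0).astype(int)

X = np.column_stack([bmi, age, clinical])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

for name, imp in zip(["BMI", "age", "clinical"], clf.feature_importances_):
    print(f"{name:>8}: {imp:.2f}")
```

If the ranking in a setup like this still put BMI or age on top, that would suggest the "importance" reflects cohort composition rather than anything causal, which is exactly my generalizability worry.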

submitted by /u/slimuser98


Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, VR, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.