
[D] 2nd Order Approximation in XGBoost’s Objective Function

Hi all,

I have a quick question regarding XGBoost’s objective function. I was reading the XGBoost paper (https://arxiv.org/pdf/1603.02754.pdf) and saw that the authors approximate the original objective function with a second-order Taylor expansion (page 2, section 2.2). Is there a particular reason why it’s expanded to second order and not higher? My guess is that a linear approximation isn’t accurate enough while higher orders would cost more to compute, but is there a deeper mathematical reason behind this, or is it just a design choice?
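
For context, the approximation in question (section 2.2 of the paper) replaces the objective at iteration $t$ with its second-order Taylor expansion around the previous prediction:

$$
\mathcal{L}^{(t)} \simeq \sum_{i=1}^{n} \left[ l\big(y_i, \hat{y}_i^{(t-1)}\big) + g_i f_t(\mathbf{x}_i) + \tfrac{1}{2} h_i f_t^2(\mathbf{x}_i) \right] + \Omega(f_t),
$$

where $g_i = \partial_{\hat{y}^{(t-1)}} l\big(y_i, \hat{y}^{(t-1)}\big)$ and $h_i = \partial^2_{\hat{y}^{(t-1)}} l\big(y_i, \hat{y}^{(t-1)}\big)$ are the first and second derivatives of the loss. If I’m reading the rest of the section correctly, keeping the $h_i$ term is what lets the authors solve for the optimal leaf weight in closed form,

$$
w_j^* = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda},
$$

so maybe the answer is related to that.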

submitted by /u/_kty