christophm.github.io
A prediction can be explained by assuming that each feature value of the instance is a "player" in a game where the prediction is the payout. Shapley values – a method from coalitional game theory – tell us how to fairly distribute the "payout" among the features.

9.5.1 General Idea

Assume the following scenario: You have trained a machine learning model to predict apartment prices. For a certain …
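The coalitional-game framing above can be made concrete with a small sketch: take a toy price "model" (the linear formula and feature values below are invented for illustration, not from the book), define the value of a coalition as the prediction with coalition features set to the instance and the rest to a baseline, and compute each feature's exact Shapley value as the weighted average of its marginal contributions over all coalitions.

```python
from itertools import combinations
from math import factorial

# Toy "model": apartment price from three hypothetical features
# (area in m^2, floor, park nearby). Numbers are invented.
def predict(features):
    area, floor, park = features
    return 1000 * area + 500 * floor + 20000 * park

instance = (50, 2, 1)   # the apartment whose prediction we explain
baseline = (30, 0, 0)   # reference values standing in for "absent" features

def value(S):
    """Payout of coalition S: predict with features in S taken from the
    instance and all other features taken from the baseline."""
    mixed = tuple(instance[i] if i in S else baseline[i] for i in range(3))
    return predict(mixed)

def shapley(i, n=3):
    """Exact Shapley value of feature i: weighted average of its marginal
    contribution over every coalition S that excludes i."""
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for size in range(n):
        for S in combinations(others, size):
            w = factorial(size) * factorial(n - size - 1) / factorial(n)
            phi += w * (value(set(S) | {i}) - value(set(S)))
    return phi

phis = [shapley(i) for i in range(3)]
print(phis)
# Efficiency property: the contributions sum to prediction - baseline payout.
print(sum(phis), predict(instance) - predict(baseline))
```

Because the toy model is linear, each feature's Shapley value reduces to its coefficient times the difference between instance and baseline value; for correlated or nonlinear models the same enumeration applies, but the exponential number of coalitions is why practical tools approximate this by sampling.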
Interpretable Machine Learning: A Guide for Making Black Box Models Explainable
Christoph Molnar, 2023-08-21

Summary: Machine learning has great potential for improving products, processes and research. But computers usually do not explain their predictions, which is a barrier to the adoption of machine learning. This book is about making machine learning models and their decisions interpretable. After …