* This post is based on the Machine Learning by Andrew Ng course on coursera.
2. Model and Cost Function - Model Representation
<Table of Contents>
1. A look at Model Representation
- Predicting house prices
2. Dataset/training set notation
3. The supervised learning process using a training set
4. Summary
1. A look at Model Representation
- Predicting house prices using housing price data from Portland
2. Dataset/training set notation
3. The supervised learning process using a training set
What we will learn from a training set such as the housing prices is how this supervised learning algorithm works.
Model Representation
To establish notation for future use, we’ll use x^{(i)} to denote the “input” variables (living area in this example), also called input features, and y^{(i)} to denote the “output” or target variable that we are trying to predict (price). A pair (x^{(i)} , y^{(i)} ) is called a training example, and the dataset that we’ll be using to learn—a list of m training examples {(x^{(i)} , y^{(i)} ); i = 1, . . . , m}—is called a training set. Note that the superscript “(i)” in the notation is simply an index into the training set, and has nothing to do with exponentiation. We will also use X to denote the space of input values, and Y to denote the space of output values. In this example, X = Y = ℝ.
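As a minimal sketch of this notation (the living areas and prices below are made-up toy numbers, not the actual dataset), the training set and the index (i) can be written like this:

```python
# Hypothetical toy training set: x = living area (sq. ft.), y = price ($1000s).
# The numbers are illustrative only, not real Portland data.
training_set = [
    (2104, 460),
    (1416, 232),
    (1534, 315),
    (852, 178),
]

m = len(training_set)        # m = number of training examples
x_1, y_1 = training_set[0]   # the pair (x^(1), y^(1)); the superscript is just an index
print(f"m = {m}, x^(1) = {x_1}, y^(1) = {y_1}")
```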
To describe the supervised learning problem slightly more formally, our goal is, given a training set, to learn a function h : X → Y so that h(x) is a “good” predictor for the corresponding value of y. For historical reasons, this function h is called a hypothesis. Seen pictorially, the process is therefore like this:
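The diagram in the original notes (not reproduced here) feeds the training set into the learning algorithm, which outputs the hypothesis h; a new input x is then fed into h to get the predicted y. As a minimal sketch, assuming a linear hypothesis in one variable, h_θ(x) = θ₀ + θ₁x, with placeholder parameter values rather than learned ones:

```python
# Minimal sketch of a hypothesis h : X -> Y for the housing example.
# h is linear in one variable, h_theta(x) = theta_0 + theta_1 * x;
# the parameter values are arbitrary placeholders, not learned ones.
theta_0 = 50.0   # intercept: predicted price ($1000s) at 0 sq. ft.
theta_1 = 0.2    # slope: extra $1000s per additional sq. ft.

def h(x):
    """Predict the price (in $1000s) for a living area x (in sq. ft.)."""
    return theta_0 + theta_1 * x

print(h(1250))   # predicted price for a 1250 sq. ft. house -> 300.0
```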
When the target variable that we’re trying to predict is continuous, such as in our housing example, we call the learning problem a regression problem. When y can take on only a small number of discrete values (such as if, given the living area, we wanted to predict if a dwelling is a house or an apartment, say), we call it a classification problem.
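A quick way to see the difference is to look at the type of the target values; the labels below are made up purely for illustration:

```python
# Regression: the target y is continuous (e.g., price in $1000s).
regression_targets = [460.0, 232.0, 315.0, 178.0]

# Classification: y takes only a small number of discrete values
# (e.g., 0 = apartment, 1 = house; the encoding itself is an arbitrary choice).
classification_targets = [1, 0, 0, 1]
```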