![](UTA-DataScience-Logo.png)

# House Price Classification using Random Forest on Kaggle Tabular Data

This project applies a Random Forest model to the Kaggle House Prices dataset, reframing price prediction as a three-class classification task. The model achieved 0.32 RMSE and 0.88 R² on training and 0.51 RMSE and 0.68 R² on validation; StandardScaler brought raw feature values into comparable ranges (e.g. GrLivArea 2500 → 0.8, GarageCars 2 → 0.3).

## Overview
The task is based on the Kaggle House Prices dataset, where the goal is to predict housing prices using structured tabular features such as square footage, number of rooms, and property characteristics.

In this project, the regression problem was reformulated into a classification task by grouping house prices into three categories:

* Low (< $150,000)
* Medium ($150,000–$300,000)
* High (> $300,000)

The approach includes:
* Data cleaning and preprocessing
* Feature engineering and transformation
* Training a Random Forest model
* Evaluating performance with RMSE and R²
The model achieved reasonable predictive performance on the validation set, demonstrating that structured features can effectively capture housing price patterns.

<img width="552" height="433" alt="image" src="https://github.com/user-attachments/assets/10604579-fa76-45a2-a512-961edc2b6187" />

## Summary of Work Done

### Data

Type:
* Input: CSV file with housing features
* Output: categorical price class
* Dataset: Kaggle House Prices dataset

Size:
* Approximately 1,460 training samples
* Approximately 80 features

Split:
* 70% training
* 15% validation
* 15% test
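A 70/15/15 split like the one above can be sketched with two calls to scikit-learn's `train_test_split` (a minimal sketch; the toy frame and column names stand in for the real train.csv features):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy frame standing in for the real training data
df = pd.DataFrame({"GrLivArea": range(100),
                   "PriceClass": [i % 3 for i in range(100)]})

X, y = df.drop(columns="PriceClass"), df["PriceClass"]

# First carve off 70% for training, then split the remaining 30% in half
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, random_state=42, stratify=y_rest)

print(len(X_train), len(X_val), len(X_test))  # 70 15 15
```

Stratifying on the price class keeps the three classes balanced across all three subsets.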

### Preprocessing / Cleanup

Converted SalePrice into three classes:
* 0 = Low
* 1 = Medium
* 2 = High
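The conversion above can be sketched with pandas' `cut` (a minimal sketch; the illustrative prices stand in for the real `SalePrice` column from train.csv):

```python
import pandas as pd

# Illustrative sale prices; the real values come from train.csv's SalePrice column
prices = pd.Series([120_000, 210_000, 450_000, 95_000, 310_000])

# Bin into the three classes used here: Low (<150k), Medium (150k-300k), High (>300k)
price_class = pd.cut(
    prices,
    bins=[0, 150_000, 300_000, float("inf")],
    labels=[0, 1, 2],  # 0 = Low, 1 = Medium, 2 = High
)

print(price_class.tolist())  # [0, 1, 2, 0, 2]
```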

Handled missing values:
* Numerical: median imputation
* Categorical: most frequent value

Feature scaling:
* StandardScaler applied to numerical features

Encoding:
* One-hot encoding for categorical variables

Removed unnecessary columns:
* ID column dropped

### Data Visualization

Histogram of GrLivArea across price classes showed:
* Larger homes tend to fall into higher price classes
* Before/after scaling plots confirmed that normalization worked

Key insight:
* GrLivArea is strongly correlated with the price category
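The per-class histogram described above can be sketched with matplotlib (synthetic data stands in for the real GrLivArea column and price classes):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Synthetic stand-ins for the real GrLivArea values and 0/1/2 price classes
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "GrLivArea": rng.normal(1500, 500, 300).clip(400, 4000),
    "PriceClass": rng.integers(0, 3, 300),
})

# One overlaid histogram per price class
fig, ax = plt.subplots()
for cls, label in zip([0, 1, 2], ["Low", "Medium", "High"]):
    ax.hist(df.loc[df["PriceClass"] == cls, "GrLivArea"],
            bins=20, alpha=0.5, label=label)
ax.set_xlabel("GrLivArea (sq ft)")
ax.set_ylabel("Count")
ax.legend()
fig.savefig("grlivarea_by_class.png")
```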

### Problem Formulation

* Input: housing features
* Output: price class

Model used:
* Random Forest Regressor

Why Random Forest:
* Handles tabular data well
* Captures nonlinear relationships
* More robust to overfitting than a single decision tree

Metrics:
* RMSE
* R² score

### Training

* Library: scikit-learn
* Environment: Jupyter Notebook

Training steps:
* Train/validation/test split
* Model fit on training data
* Evaluation on the validation set

Stopping criteria:
* Default Random Forest parameters (no custom stopping rule)
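With default parameters, the training step reduces to a few lines. A sketch with synthetic stand-ins for the preprocessed features and the 0/1/2 price classes (the project fits a Random Forest Regressor on the class labels):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic stand-ins for the preprocessed features and 0/1/2 price classes
X_train = rng.normal(size=(200, 5))
y_train = rng.integers(0, 3, size=200)
X_val = rng.normal(size=(50, 5))

# Default Random Forest parameters, as in the project
model = RandomForestRegressor(random_state=42)
model.fit(X_train, y_train)

preds = model.predict(X_val)
print(preds.shape)  # one prediction per validation row
```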

## Performance Evaluation
To evaluate the model's effectiveness, we used two primary metrics: RMSE and R² score. These metrics indicate how well the model predicts the housing price categories.
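Both metrics come straight from scikit-learn. A minimal sketch with illustrative true classes and model outputs:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Illustrative true price classes and model outputs
y_true = np.array([0, 1, 2, 1, 0, 2])
y_pred = np.array([0.2, 1.1, 1.8, 1.0, 0.4, 2.0])

rmse = np.sqrt(mean_squared_error(y_true, y_pred))
r2 = r2_score(y_true, y_pred)
print(round(rmse, 3), round(r2, 3))  # RMSE ~ 0.204, R² ~ 0.938 for these values
```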

### Conclusions
Random Forest performed well on tabular housing data, and feature preprocessing was essential. Converting the regression task into classification simplified the problem.

### Future Work
* Try true classification models
* Hyperparameter tuning
* Feature selection to reduce dimensionality
* Use the original regression target instead of binning prices
* Try neural networks on tabular data

### How to Reproduce Results
1. Download the dataset from Kaggle.
2. Place train.csv and test.csv in the project directory.
3. Run the notebook end to end: data preprocessing, model training, prediction generation.

Output:
* submission.csv for Kaggle

### Repository Structure
* Kaggle Tabular Data.ipynb: main notebook with the full pipeline (preprocessing, visualization, training, submission generation)

### Software Setup
Required libraries:
* pandas
* numpy
* matplotlib
* scikit-learn

Install with `pip install pandas numpy matplotlib scikit-learn`.

### Data
* Source: Kaggle House Prices Competition
* Files: train.csv, test.csv

### Training
Run all notebook cells sequentially:
* Data cleaning
* Feature engineering
* Model training



### Performance Evaluation
Evaluated using:
* RMSE
* R² score
* The validation set for model assessment

### Data Visualization
<img width="571" height="453" alt="image" src="https://github.com/user-attachments/assets/6a62b2db-5614-4d83-a19a-3d916bf75b3a" />

<img width="571" height="453" alt="image" src="https://github.com/user-attachments/assets/b483b88a-dac0-42eb-938c-97643ea2ba22" />


| Feature    | Example value (before scaling) | Example value (after scaling) |
|------------|-------------------------------|-------------------------------|
| GrLivArea  | 2500                          | 0.8                           |
| GarageCars | 2                             | 0.3                           |


| Dataset | RMSE | R² Score |
| -------------- | ---- | -------- |
| Training Set | 0.32 | 0.88 |
| Validation Set | 0.51 | 0.68 |


### Citations
* Kaggle House Prices - Advanced Regression Techniques: https://www.kaggle.com/c/house-prices-advanced-regression-techniques