Predicting Tea Sales with Machine Learning: Linear Regression, Gradient Descent & Regularization (Beginner-Friendly + Code)

Published: December 20, 2025, 23:33 GMT+8
9 min read
Source: Dev.to


📚 What You'll Learn

  • Linear regression (tea sales vs. temperature)
  • Loss functions (how wrong the predictions are)
  • Gradient descent (improving step by step)
  • Overfitting (memorizing vs. learning patterns)
  • Regularization (keeping the model simple)
  • Regularized loss functions (Ridge/Lasso)
  • Practical code examples with NumPy and scikit-learn

🧪 Setup (run these first)

# Install if needed:
# pip install numpy pandas scikit-learn matplotlib

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

np.random.seed(42)

⭐ Scenario 1 – Linear Regression (Tea Sales vs. Temperature)

Idea: the colder it gets → the more tea sells. Fit a straight line to predict sales from temperature.

# Synthetic dataset: temperature (°C) → tea cups sold
temps = np.array([10, 12, 15, 18, 20, 22, 24, 26, 28]).reshape(-1, 1)
tea_sales = np.array([100, 95, 85, 70, 60, 55, 50, 45, 40])

# Fit a basic linear regression
lin = LinearRegression()
lin.fit(temps, tea_sales)

print("Slope (m):", lin.coef_[0])          # cups change per 1 °C
print("Intercept (c):", lin.intercept_)   # base demand when temp = 0 °C

# Predict for tomorrow (e.g., 21 °C)
tomorrow_temp = np.array([[21]])
pred_sales = lin.predict(tomorrow_temp)
print("Predicted tea cups at 21 °C:", int(pred_sales[0]))

# Plot
plt.scatter(temps, tea_sales, color="teal", label="Actual")
plt.plot(temps, lin.predict(temps), color="orange", label="Fitted line")
plt.xlabel("Temperature (°C)")
plt.ylabel("Tea cups sold")
plt.title("Linear Regression: Tea Sales vs. Temperature")
plt.legend()
plt.show()

⭐ Scenario 2 – Cost Function (Measuring How Wrong We Are)

Idea: the cost is the average of the squared errors, so big mistakes are penalized more heavily.

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

y_pred = lin.predict(temps)
print("Mean Squared Error (MSE):", mse(tea_sales, y_pred))

⭐ Scenario 3 – Gradient Descent (Improving Step by Step)

Idea: nudge the slope m and intercept c little by little to lower the cost, much like fine-tuning a tea recipe.

# Gradient Descent for y = m*x + c (from scratch)
X = temps.flatten()
y = tea_sales.astype(float)

m, c = 0.0, 0.0          # initial guesses
lr = 0.0005              # learning rate (step size)
epochs = 5000

def predictions(m, c, X):
    return m * X + c

def gradients(m, c, X, y):
    y_hat = predictions(m, c, X)
    dm = (-2 / len(X)) * np.sum(X * (y - y_hat))
    dc = (-2 / len(X)) * np.sum(y - y_hat)
    return dm, dc

history = []
for _ in range(epochs):
    dm, dc = gradients(m, c, X, y)
    m -= lr * dm
    c -= lr * dc
    history.append(mse(y, predictions(m, c, X)))

print(f"GD learned slope m={m:.3f}, intercept c={c:.3f}, final MSE={history[-1]:.2f}")

# Plot loss curve
plt.plot(history)
plt.xlabel("Epoch")
plt.ylabel("MSE (Cost)")
plt.title("Gradient Descent: Cost vs. Epochs")
plt.show()

Tip: if lr is too large, the loss will bounce around or blow up. If it is too small, learning will be painfully slow.
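To see this for yourself, here is a small optional sketch that reruns a shortened version of the loop above with a few illustrative learning rates (the specific values are arbitrary) and prints the resulting cost.

# Optional sanity check: reuses X, y, gradients, predictions, mse from above
for trial_lr in [0.00005, 0.0005, 0.003]:   # tiny, reasonable, too large
    m_t, c_t = 0.0, 0.0
    for _ in range(300):
        dm_t, dc_t = gradients(m_t, c_t, X, y)
        m_t -= trial_lr * dm_t
        c_t -= trial_lr * dc_t
    final_cost = mse(y, predictions(m_t, c_t, X))
    print(f"lr={trial_lr}: MSE after 300 epochs = {final_cost:.3g}")

With a rate that is too large, the printed cost should be astronomically big; with a tiny rate, it barely improves.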

⭐ Scenario 4 – Overfitting (Memorizing Noise)

We'll simulate a richer dataset that contains both useful and noisy features.

# Build a dataset with signal + noise
n = 300
temp      = np.random.uniform(5, 35, size=n)               # useful
rain      = np.random.binomial(1, 0.3, size=n)             # somewhat useful
festival  = np.random.binomial(1, 0.1, size=n)             # sometimes useful
traffic   = np.random.normal(0, 1, size=n)                # weak/noisy
dog_barks = np.random.normal(0, 1, size=n)                # pure noise

# True relationship (unknown to the model)
true_sales = (120 - 2.5 * temp + 10 * rain + 15 * festival
              + 1.0 * np.random.normal(0, 3, size=n))   # added noise

# Feature matrix
X = np.column_stack([temp, rain, festival, traffic, dog_barks])
feature_names = ["temp", "rain", "festival", "traffic", "dog_barks"]

X_train, X_test, y_train, y_test = train_test_split(
    X, true_sales, test_size=0.25, random_state=42
)

# Plain Linear Regression (can overfit)
lr_model = LinearRegression()
lr_model.fit(X_train, y_train)

print("Linear Regression Coefficients:")
for name, coef in zip(feature_names, lr_model.coef_):
    print(f"  {name}: {coef:.3f}")

print("Train MSE:", mean_squared_error(y_train, lr_model.predict(X_train)))
print("Test  MSE:", mean_squared_error(y_test,  lr_model.predict(X_test)))

If you see unusually large coefficients on obviously noisy features (e.g., dog_barks), or the training MSE is far lower than the test MSE, that is overfitting.

⭐ Scenario 5 – Fixing Overfitting

Strategies

  1. Remove useless features (manual feature selection; see the sketch after this list).
  2. Get more data (the classic fix).
  3. Use regularization (a systematic penalty on large weights).
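As a concrete example of strategy 1, here is a minimal sketch that refits plain linear regression on only the columns we actually trust (temp, rain, festival) and compares test error with the full model; the column indices simply follow the feature order defined earlier.

# Strategy 1 sketch: keep only the features we trust
useful_cols = [0, 1, 2]                      # temp, rain, festival in feature_names
X_train_small = X_train[:, useful_cols]
X_test_small  = X_test[:,  useful_cols]

small_model = LinearRegression()
small_model.fit(X_train_small, y_train)

print("Reduced-feature Test MSE:",
      mean_squared_error(y_test, small_model.predict(X_test_small)))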

⭐ Scenario 6 – Regularization (A Penalty on Complexity)

Regularization adds a penalty term to the cost that shrinks large coefficients, like telling your tea master to use fewer ingredients or lose a bonus.
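Concretely, the regularized cost is the ordinary MSE plus a penalty on the weights: Ridge (L2) adds alpha times the sum of squared weights, Lasso (L1) adds alpha times the sum of absolute weights. A tiny illustrative sketch (the exact scaling differs slightly from scikit-learn's internal objective, and the intercept is conventionally left unpenalized):

# Illustrative only: MSE plus an L2 (Ridge) or L1 (Lasso) penalty on the weights
def ridge_cost(y_true, y_pred, weights, alpha):
    # weights: 1-D NumPy array of model coefficients
    return mse(y_true, y_pred) + alpha * np.sum(weights ** 2)

def lasso_cost(y_true, y_pred, weights, alpha):
    return mse(y_true, y_pred) + alpha * np.sum(np.abs(weights))

# Example: ridge_cost(tea_sales, lin.predict(temps), lin.coef_, alpha=1.0)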

⭐ Scenario 7 – Regularized Linear Regression (Ridge vs. Lasso)

# Ridge (L2) – penalizes squared weights
ridge = Ridge(alpha=1.0)          # alpha = regularization strength
ridge.fit(X_train, y_train)

# Lasso (L1) – penalizes absolute weights, can zero‑out features
lasso = Lasso(alpha=0.5, max_iter=10000)
lasso.fit(X_train, y_train)

def show_results(model, name):
    print(f"\n{name} Coefficients:")
    for feat, coef in zip(feature_names, model.coef_):
        print(f"  {feat}: {coef:.3f}")
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    test_mse  = mean_squared_error(y_test,  model.predict(X_test))
    print(f"Train MSE: {train_mse:.2f}")
    print(f"Test  MSE: {test_mse:.2f}")

show_results(ridge, "Ridge")
show_results(lasso, "Lasso")

What to look for

Model | Effect on coefficients | Typical result
Ridge | Shrinks all coefficients toward zero but keeps every feature | Lower variance, better performance on the test set
Lasso | Can drive some coefficients exactly to zero (see the check below) | Feature selection alongside regularization
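You can check the feature-selection effect directly on the Lasso fitted above, for instance by listing the coefficients it drove exactly to zero (which features get zeroed depends on the data and on alpha):

# Which coefficients did Lasso push exactly to zero?
zeroed = [name for name, coef in zip(feature_names, lasso.coef_) if coef == 0.0]
print("Lasso zeroed out:", zeroed if zeroed else "none")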


Now refit with a stronger Ridge penalty and a Lasso penalty, and watch what happens to the coefficients on the noisy features.

# Ridge: L2 penalty
ridge = Ridge(alpha=10.0)   # alpha = λ (higher = stronger penalty)
ridge.fit(X_train, y_train)

print("\nRidge Coefficients (alpha=10):")
for name, coef in zip(feature_names, ridge.coef_):
    print(f"  {name}: {coef:.3f}")

print("Ridge Train MSE:", mean_squared_error(y_train, ridge.predict(X_train)))
print("Ridge Test  MSE:", mean_squared_error(y_test,  ridge.predict(X_test)))

# Lasso: L1 penalty
lasso = Lasso(alpha=1.0)    # try different alphas like 0.1, 0.5, 2.0
lasso.fit(X_train, y_train)

print("\nLasso Coefficients (alpha=1.0):")
for name, coef in zip(feature_names, lasso.coef_):
    print(f"  {name}: {coef:.3f}")

print("Lasso Train MSE:", mean_squared_error(y_train, lasso.predict(X_train)))
print("Lasso Test  MSE:", mean_squared_error(y_test,  lasso.predict(X_test)))

What to look for

  • Ridge should shrink the noisy coefficients closer to zero.
  • Lasso may set truly useless features exactly to 0 (feature selection).
  • Test MSE should improve over plain linear regression (see the comparison below).
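To verify the last point, print the three test errors side by side using the models fitted above (the exact numbers will vary with the random data):

# Side-by-side comparison of test error
for name, model in [("Linear", lr_model), ("Ridge", ridge), ("Lasso", lasso)]:
    print(f"{name:6s} Test MSE: {mean_squared_error(y_test, model.predict(X_test)):.2f}")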

⭐ Scenario 8 – How Regularization Fixes Overfitting (A Closer Look)

Let's compare models under different penalty strengths and visualize the coefficient shrinkage.

alphas = [0.0, 0.1, 1.0, 10.0, 50.0]  # 0.0 ≈ plain linear regression, for comparison
coef_paths_ridge = []
train_mse_ridge, test_mse_ridge = [], []

for a in alphas:
    if a == 0.0:
        model = LinearRegression()
    else:
        model = Ridge(alpha=a)
    model.fit(X_train, y_train)
    coef_paths_ridge.append(model.coef_)
    train_mse_ridge.append(mean_squared_error(y_train, model.predict(X_train)))
    test_mse_ridge.append(mean_squared_error(y_test, model.predict(X_test)))

coef_paths_ridge = np.array(coef_paths_ridge)

# Plot the Ridge coefficient paths
plt.figure(figsize=(8, 5))
for i, name in enumerate(feature_names):
    plt.plot(alphas, coef_paths_ridge[:, i], marker="o", label=name)
plt.xscale("log")
plt.xlabel("alpha (log scale)")
plt.ylabel("Coefficient value")
plt.title("Ridge: Coefficient Shrinkage with Increasing Penalty")
plt.legend()
plt.show()

# Plot train vs. test MSE for Ridge
plt.figure(figsize=(8, 5))
plt.plot(alphas, train_mse_ridge, marker="o", label="Train MSE")
plt.plot(alphas, test_mse_ridge, marker="o", label="Test MSE")
plt.xscale("log")
plt.xlabel("alpha (log scale)")
plt.ylabel("MSE")
plt.title("Ridge: Train vs Test MSE Across Penalties")
plt.legend()
plt.show()

Interpretation

  • At low alpha, coefficients stay large → risk of overfitting (low train MSE, higher test MSE).
  • As alpha grows, coefficients shrink → a simpler model that generalizes better.
  • If alpha is too high, the model becomes too simple → underfitting (both MSEs rise).
  • Look for the alpha with the lowest test MSE; that is the sweet spot (a quick way to find it is sketched below).
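One quick way to pick that sweet spot from the sweep above, plus a more principled cross-validated alternative using scikit-learn's RidgeCV, is sketched here; treat it as a starting point rather than a rule:

# Pick the alpha with the lowest test MSE from the sweep above
best_idx = int(np.argmin(test_mse_ridge))
print("Best alpha from this sweep:", alphas[best_idx],
      "with Test MSE:", round(test_mse_ridge[best_idx], 2))

# A more principled alternative: cross-validated Ridge
from sklearn.linear_model import RidgeCV
ridge_cv = RidgeCV(alphas=[0.1, 1.0, 10.0, 50.0], cv=5)
ridge_cv.fit(X_train, y_train)
print("Alpha chosen by 5-fold CV:", ridge_cv.alpha_)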

🧠 Bonus: A Simple Tea-Forecast Helper

def forecast_tea_cups(temp_c, rain=0, festival=0, model=ridge):
    """Quick helper using your fitted model (default: ridge)."""
    x = np.array([[temp_c, rain, festival, 0.0, 0.0]])  # ignore traffic/dog_barks at prediction time
    return float(model.predict(x)[0])

print("Forecast for 18°C, raining, festival day:",
      round(forecast_tea_cups(18, rain=1, festival=1)))
print("Forecast for 30°C, no rain, normal day:",
      round(forecast_tea_cups(30, rain=0, festival=0)))

✅ Final Takeaways

  • Linear Regression: fits the best straight line between features and a target.
  • Cost Function (MSE): penalizes prediction errors, especially the large ones.
  • Gradient Descent: improves the parameters iteratively to minimize the cost.
  • Overfitting: the model learns noise; it looks great on training data but fails on new data.
  • Regularization (Ridge/Lasso): shrinks weights, suppresses noise, and improves generalization.
  • Choose α (lambda) carefully: too small → overfitting; too large → underfitting.

You now have a complete, runnable, notebook-style guide that ties tea-stall intuition to real-world machine learning practice. Happy modeling!