Posted by Halcom on 2017-4-8 20:15:59

K-means Clustering Algorithm

Kmeans clustering — the K-means clustering algorithm
Baidu netdisk link:
Video link: http://pan.baidu.com/s/1jIwYfky
Full details are on the halcom.cn forum; contact QQ: 3283892722
The forum is a platform for learning and exchange, and I will share material with everyone piece by piece.
You are welcome to record videos and submit them to me; I will post them, and you can share them on the forum for tips.
Dedicated video player: http://halcom.cn/forum.php?mod=viewthread&tid=258&extra=page%3D1

Environment: Win7 32-bit, Anaconda2-4.3.1-Windows-x86.exe (Python 2). The code is as follows.

Kmeans_function.py (helper-function file):
# -*- coding: utf-8 -*-
"""
Created on Sat Apr 08 22:19:29 2017

@author: ysw
"""
from numpy import *
#import time
import matplotlib.pyplot as plt

# calculate Euclidean distance
def euclDistance(vector1, vector2):
    return sqrt(sum(power(vector2 - vector1, 2)))

# init centroids with random samples
def initCentroids(dataSet, k):
    numSamples, dim = dataSet.shape
    centroids = zeros((k, dim))
    for i in range(k):
        # pick a random sample as the i-th initial centroid
        index = int(random.uniform(0, numSamples))
        centroids[i, :] = dataSet[index, :]
    return centroids

# k-means cluster
def kmeans(dataSet, k):
    numSamples = dataSet.shape[0]
    # first column stores which cluster this sample belongs to,
    # second column stores the error between this sample and its centroid
    clusterAssment = mat(zeros((numSamples, 2)))
    clusterChanged = True

    ## step 1: init centroids
    centroids = initCentroids(dataSet, k)

    while clusterChanged:
        clusterChanged = False
        ## for each sample
        for i in xrange(numSamples):
            minDist = inf
            minIndex = 0
            ## step 2: find the closest centroid
            for j in range(k):
                distance = euclDistance(centroids[j, :], dataSet[i, :])
                if distance < minDist:
                    minDist = distance
                    minIndex = j

            ## step 3: update the sample's cluster assignment
            if clusterAssment[i, 0] != minIndex:
                clusterChanged = True
                clusterAssment[i, :] = minIndex, minDist**2

        ## step 4: move each centroid to the mean of its cluster
        for j in range(k):
            pointsInCluster = dataSet[nonzero(clusterAssment[:, 0].A == j)[0]]
            centroids[j, :] = mean(pointsInCluster, axis = 0)

    print 'Congratulations, cluster complete!'
    return centroids, clusterAssment

# show the clusters (only works with 2-D data)
def showCluster(dataSet, k, centroids, clusterAssment):
    numSamples, dim = dataSet.shape
    if dim != 2:
      print "Sorry! I can not draw because the dimension of your data is not 2!"
      return 1

    mark = ['or', 'ob', 'og', 'ok', '^r', '+r', 'sr', 'dr', '<r', 'pr']
    if k > len(mark):
      print "Sorry! Your k is too large!"
      return 1

    # draw all samples
    for i in xrange(numSamples):
        markIndex = int(clusterAssment[i, 0])
        plt.plot(dataSet[i, 0], dataSet[i, 1], mark[markIndex])

    mark = ['Dr', 'Db', 'Dg', 'Dk', '^b', '+b', 'sb', 'db', '<b', 'pb']
    # draw the centroids
    for i in range(k):
        plt.plot(centroids[i, 0], centroids[i, 1], mark[i], markersize = 12)

    plt.show()

Kmeans_main.py (main program):
from numpy import *
#import time
#import matplotlib.pyplot as plt
import Kmeans_function

## step 1: load data
print "step 1: load data..."
dataSet = []
fileIn = open(r'C:\Users\ysw\Desktop\Python(x,y)2.7.10\testSet.txt')
for line in fileIn.readlines():
    lineArr = line.strip().split()
    dataSet.append([float(lineArr[0]), float(lineArr[1])])
fileIn.close()

## step 2: clustering...
print "step 2: clustering..."
dataSet = mat(dataSet)
k = 4
centroids, clusterAssment = Kmeans_function.kmeans(dataSet, k)

## step 3: show the result
print "step 3: show the result..."
Kmeans_function.showCluster(dataSet, k, centroids, clusterAssment)
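For comparison, the same algorithm can be written without the explicit double loop by letting NumPy broadcasting compute all sample-to-centroid distances at once. The sketch below is not the forum code above; `kmeans_np`, the toy blob data, and the random seeds are illustrative assumptions (Python 3 style):

```python
import numpy as np

def kmeans_np(data, k, n_iter=100, seed=0):
    """Minimal k-means sketch: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    # step 1: pick k distinct samples as the initial centroids
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(n_iter):
        # step 2: assign every sample to its nearest centroid
        # (broadcasting gives an (n_samples, k) distance matrix)
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # steps 3-4: move each centroid to the mean of its cluster;
        # an empty cluster keeps its old centroid
        new_centroids = np.array([
            data[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)])
        if np.allclose(new_centroids, centroids):  # converged
            break
        centroids = new_centroids
    return centroids, labels

# toy data: two tight blobs around (0, 0) and (5, 5)
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (20, 2)),
                  rng.normal(5, 0.1, (20, 2))])
centroids, labels = kmeans_np(data, k=2)
```

The vectorized assignment step replaces the `for i` / `for j` loops in `kmeans` above and is usually much faster for large data, at the cost of an (n_samples × k) temporary array.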


References: http://blog.csdn.net/zouxy09/article/details/17589329
http://halcom.cn/forum.php?mod=viewthread&tid=2770&extra=


