Example of a file synchronization server implemented in Python


Posted in Python on June 02, 2015

This article describes a file synchronization server implemented in Python, shared here for your reference. The implementation is as follows:

The server side uses asyncore and saves the files it receives to local disk.

The client side uses pyinotify to watch a directory for changes and sends the changed files to the server.

Key points:

1. struct is used to pack the metadata of the file being sent; once the server receives this header, it uses the information in it to receive the file the client transfers (see the header sketch after this list).

2. The client is multi-threaded: pyinotify watches for file changes, puts the events on a queue, and a separate thread sends the files.
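
As a quick illustration of point 1, here is a minimal, standalone sketch of the fixed-size header format ("!LL128s128sL") that the server and client below both use; the path, name and size values are made up purely for illustration:

#!/usr/bin/python 
# Minimal sketch of the header: path length, name length, 128-byte path, 
# 128-byte name, file size. The values below are made up for illustration. 
import struct 

head_packet_format = "!LL128s128sL" 
filepath, filename, filesize = "host1", "load.rrd", 4096 

packed = struct.pack(head_packet_format, 
           len(filepath), len(filename), 
           filepath, filename, filesize) 
print len(packed) == struct.calcsize(head_packet_format)   # True, the header is always 268 bytes 

# The receiver reads exactly calcsize() bytes, unpacks, and trims the 
# zero-padded 128-byte fields back to their real lengths. 
filepath_len, filename_len, path, name, size = struct.unpack(head_packet_format, packed) 
print path[:filepath_len], name[:filename_len], size 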

Here is the code:

Server:

#!/usr/bin/python 
# coding: utf-8 
# Receive files from clients and store them locally, using asyncore. 
import asyncore 
import socket 
import errno 
import logging 
import time 
import sys 
import struct 
import os 
import fcntl 
import threading 
#from rrd_graph import MakeGraph   # optional: only needed if MakeGraph() below is enabled 
try: 
  import rrdtool 
except (ImportError, ImportWarning): 
  print "Hope this information can help you:" 
  print "Can not find rrdtool module in sys path, just run [apt-get install python-rrdtool] in ubuntu." 
  sys.exit(1) 
class RequestHandler(asyncore.dispatcher): 
  def __init__(self, sock, map=None, chunk_size=1024): 
    self.logger = logging.getLogger('%s-%s' % (self.__class__.__name__, str(sock.getsockname()))) 
    self.chunk_size = chunk_size 
    asyncore.dispatcher.__init__(self,sock,map) 
    self.data_to_write = list() 
  def readable(self): 
    #self.logger.debug("readable() called.") 
    return True 
  def writable(self): 
    response = (not self.connected) or len(self.data_to_write) 
    #self.logger.debug('writable() -> %s data length -> %s' % (response, len(self.data_to_write))) 
    return response 
  def handle_write(self): 
    data = self.data_to_write.pop() 
    #self.logger.debug("handle_write()->%s size: %s",data.rstrip('\r\n'),len(data)) 
    sent = self.send(data[:self.chunk_size]) 
    if sent < len(data): 
      remaining = data[sent:] 
      self.data_to_write.append(remaining) 
  def handle_read(self): 
    self.writen_size = 0 
    nagios_perfdata = '../perfdata' 
    head_packet_format = "!LL128s128sL" 
    head_packet_size = struct.calcsize(head_packet_format) 
    data = self.recv(head_packet_size) 
    if not data: 
      return 
    filepath_len, filename_len, filepath,filename, filesize = struct.unpack(head_packet_format,data) 
    filepath = os.path.join(nagios_perfdata, filepath[:filepath_len]) 
    filename = filename[:filename_len] 
    self.logger.debug("update file: %s" % filepath + '/' + filename)
    try: 
      if not os.path.exists(filepath): 
        os.makedirs(filepath) 
    except OSError: 
      pass 
    self.fd = open(os.path.join(filepath,filename), 'w') 
    #self.fd = open(filename,'w') 
    if filesize > self.chunk_size: 
      times = filesize / self.chunk_size 
      first_part_size = times * self.chunk_size 
      second_part_size = filesize % self.chunk_size 
      while 1: 
        try: 
          data = self.recv(self.chunk_size) 
          #self.logger.debug("handle_read()->%s size.",len(data)) 
        except socket.error,e: 
          if e.args[0] == errno.EWOULDBLOCK: 
            print "EWOULDBLOCK" 
            time.sleep(1) 
          else: 
            #self.logger.debug("Error happend while receive data: %s" % e) 
            break 
        else: 
          self.fd.write(data) 
          self.fd.flush() 
          self.writen_size += len(data) 
          if self.writen_size == first_part_size: 
            break 
      #receive the packet at last 
      while 1: 
        try: 
          data = self.recv(second_part_size) 
          #self.logger.debug("handle_read()->%s size.",len(data)) 
        except socket.error,e: 
          if e.args[0] == errno.EWOULDBLOCK: 
            print "EWOULDBLOCK" 
            time.sleep(1) 
          else: 
            #self.logger.debug("Error happend while receive data: %s" % e) 
            break 
        else: 
          self.fd.write(data) 
          self.fd.flush() 
          self.writen_size += len(data) 
          if len(data) == second_part_size: 
            break 
    elif filesize <= self.chunk_size: 
      while 1: 
        try: 
          data = self.recv(filesize) 
          #self.logger.debug("handle_read()->%s size.",len(data)) 
        except socket.error,e: 
          if e.args[0] == errno.EWOULDBLOCK: 
            print "EWOULDBLOCK" 
            time.sleep(1) 
          else: 
            #self.logger.debug("Error happend while receive data: %s" % e) 
            break 
        else: 
          self.fd.write(data) 
          self.fd.flush() 
          self.writen_size += len(data) 
          if len(data) == filesize: 
            break 
    self.logger.debug("File size: %s" % self.writen_size) 
class SyncServer(asyncore.dispatcher): 
  def __init__(self,host,port): 
    asyncore.dispatcher.__init__(self) 
    self.debug = True 
    self.logger = logging.getLogger(self.__class__.__name__) 
    self.create_socket(socket.AF_INET,socket.SOCK_STREAM) 
    self.set_reuse_addr() 
    self.bind((host,port)) 
    self.listen(2000) 
  def handle_accept(self): 
    client_socket = self.accept() 
    if client_socket is None: 
      pass 
    else: 
      sock, addr = client_socket 
      #self.logger.debug("Incoming connection from %s" % repr(addr)) 
      handler = RequestHandler(sock=sock) 
class RunServer(threading.Thread): 
  def __init__(self): 
    super(RunServer,self).__init__() 
    self.daemon = False 
  def run(self): 
    server = SyncServer('',9999) 
    asyncore.loop(use_poll=True) 
def StartServer(): 
  logging.basicConfig(level=logging.DEBUG, 
            format='%(name)s: %(message)s', 
            ) 
  RunServer().start() 
  #MakeGraph().start() 
if __name__ == '__main__': 
  StartServer()

Client:

# Monitor a path with inotify (the pyinotify module) and send changed files to the remote server. 
# Use sendfile(2) instead of the socket send function if python-sendfile is installed. 
import socket 
import time 
import os 
import sys 
import struct 
import threading 
import Queue 
try: 
   import pyinotify 
except (ImportError, ImportWarning): 
   print "Hope this information can help you:" 
   print "Can not find pyinotify module in sys path, just run [apt-get install python-pyinotify] in ubuntu." 
   sys.exit(1) 
try: 
   from sendfile import sendfile 
except (ImportError, ImportWarning): 
   pass 
filetype_filter = [".rrd",".xml"] 
def check_filetype(pathname): 
   for suffix_name in filetype_filter: 
     if pathname[-4:] == suffix_name: 
       return True 
   try: 
     end_string = pathname.rsplit('.')[-1:][0] 
     end_int = int(end_string) 
   except ValueError: 
     pass 
   else: 
     # the extension is all digits, i.e. the pathname ends with a number; skip it 
     return False 
class sync_file(threading.Thread): 
   def __init__(self, addr, events_queue): 
     super(sync_file,self).__init__() 
     self.daemon = False 
     self.queue = events_queue 
     self.addr = addr 
     self.chunk_size = 1024 
   def run(self): 
     while 1: 
       event = self.queue.get() 
       if check_filetype(event.pathname): 
         print time.asctime(),event.maskname, event.pathname 
         filepath = event.path.split('/')[-1:][0] 
         filename = event.name 
         filesize = os.stat(os.path.join(event.path, filename)).st_size 
         sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 
         filepath_len = len(filepath) 
         filename_len = len(filename) 
         sock.connect(self.addr) 
         offset = 0 
         data = struct.pack("!LL128s128sL",filepath_len, filename_len, filepath,filename,filesize) 
         fd = open(event.pathname,'rb') 
         sock.sendall(data) 
         if "sendfile" in sys.modules: 
           # print "use sendfile(2)" 
           while 1: 
             sent = sendfile(sock.fileno(), fd.fileno(), offset, self.chunk_size) 
             if sent == 0: 
               break 
             offset += sent 
         else: 
           # print "use original send function" 
           while 1: 
             data = fd.read(self.chunk_size) 
             if not data: break 
             sock.send(data) 
         sock.close() 
         fd.close() 
class EventHandler(pyinotify.ProcessEvent): 
   def __init__(self, events_queue): 
     super(EventHandler,self).__init__() 
     self.events_queue = events_queue 
   def my_init(self): 
     pass 
   def process_IN_CLOSE_WRITE(self,event): 
     self.events_queue.put(event) 
   def process_IN_MOVED_TO(self,event): 
     self.events_queue.put(event) 
def start_notify(path, mask, sync_server): 
   events_queue = Queue.Queue() 
   sync_thread_pool = list() 
   for i in range(500): 
     sync_thread_pool.append(sync_file(sync_server, events_queue)) 
   for i in sync_thread_pool: 
     i.start() 
   wm = pyinotify.WatchManager() 
   notifier = pyinotify.Notifier(wm,EventHandler(events_queue)) 
   wdd = wm.add_watch(path,mask,rec=True) 
   notifier.loop() 
def do_notify(): 
   perfdata_path = '/var/lib/pnp4nagios/perfdata' 
   mask = pyinotify.IN_CLOSE_WRITE|pyinotify.IN_MOVED_TO 
   sync_server = ('127.0.0.1',9999) 
   start_notify(perfdata_path,mask,sync_server) 
if __name__ == '__main__': 
   do_notify()

Monitoring a thread pool in Python

#!/usr/bin/python 
import threading 
import time 
class Monitor(threading.Thread): 
  def __init__(self, *args,**kwargs): 
    super(Monitor,self).__init__() 
    self.daemon = False 
    self.args = args 
    self.kwargs = kwargs 
    self.pool_list = [] 
  def run(self): 
    print self.args 
    print self.kwargs 
    for name,value in self.kwargs.items(): 
      obj = value[0] 
      temp = {} 
      temp[name] = obj 
      self.pool_list.append(temp) 
    while 1: 
      print self.pool_list 
      for name,value in self.kwargs.items(): 
        obj = value[0] 
        parameters = value[1:] 
        died_threads = self.cal_died_thread(self.pool_list,name)
        print "died_threads", died_threads 
        if died_threads >0: 
          for i in range(died_threads): 
            print "start %s thread..." % name 
            t = obj[0].__class__(*parameters) 
            t.start() 
            self.add_to_pool_list(t,name) 
        else: 
          # nothing died in this pool; go on and check the next pool 
          continue 
      time.sleep(0.5) 
  def cal_died_thread(self,pool_list,name): 
    i = 0 
    for item in self.pool_list: 
      for k,v in item.items(): 
        if name == k: 
          lists = v 
    for t in lists: 
      if not t.isAlive(): 
        self.remove_from_pool_list(t) 
        i +=1 
    return i 
  def add_to_pool_list(self,obj,name): 
    for item in self.pool_list: 
      for k,v in item.items(): 
        if name == k: 
          v.append(obj) 
  def remove_from_pool_list(self, obj): 
    for item in self.pool_list: 
      for k,v in item.items(): 
        try: 
          v.remove(obj) 
        except ValueError: 
          pass 
        else: 
          return

Usage:

rrds_queue = Queue.Queue() 
make_rrds_pool = [] 
for i in range(5): 
  make_rrds_pool.append(MakeRrds(rrds_queue)) 
for i in make_rrds_pool: 
  i.start() 
make_graph_pool = [] 
for i in range(5): 
  make_graph_pool.append(MakeGraph(rrds_queue)) 
for i in make_graph_pool: 
  i.start() 
monitor = Monitor(make_rrds_pool=(make_rrds_pool, rrds_queue), 
          make_graph_pool=(make_graph_pool, rrds_queue)) 
monitor.start()

Explanation:

1. It accepts keyword arguments whose values are tuples: the first element is a thread pool (a list of threads), and the remaining elements are the constructor arguments for those threads.
2. Every 0.5 seconds it checks the pools; if threads have died, it records how many died and starts the same number of new threads.
3. If no thread has died, it does nothing (see the minimal sketch after this list).
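
For example, here is a minimal, self-contained sketch of how the monitor is fed and how it keeps a pool at size; the Worker class is hypothetical (it exits after handling one item, so there is something to restart), and the Monitor class from the listing above is assumed to be in scope:

import Queue 
import threading 
import time 

class Worker(threading.Thread): 
  # hypothetical worker: handles one queue item, then exits, 
  # which gives Monitor a dead thread to replace 
  def __init__(self, queue): 
    super(Worker, self).__init__() 
    self.daemon = False 
    self.queue = queue 
  def run(self): 
    item = self.queue.get() 
    print "handled:", item 

queue = Queue.Queue() 
pool = [Worker(queue) for i in range(3)] 
for w in pool: 
  w.start() 
# each keyword argument maps to (thread pool list, constructor arguments...) 
Monitor(worker_pool=(pool, queue)).start() 
for i in range(10): 
  queue.put(i)        # as workers exit, Monitor notices and starts replacements 
  time.sleep(1) 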

Calling Django modules from an external script

import os 
import sys 
sys.path.insert(0,'/data/cloud_manage') 
from django.core.management import setup_environ 
import settings 
setup_environ(settings) 
from common.monitor import Monitor 
from django.db import connection, transaction

The prerequisite is that a Django project already exists; here a project named cloud_manage was created.
With the environment set up this way, you can call not only Django's own modules but also the project's own code.
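
For instance, a minimal sketch of what such a standalone script can do once the environment is prepared; it repeats the setup above for completeness, and the query is purely illustrative:

import sys 
sys.path.insert(0, '/data/cloud_manage') 
from django.core.management import setup_environ 
import settings 
setup_environ(settings) 

# both Django's own modules and the project's code are importable now 
from common.monitor import Monitor     # project module, as in the snippet above 
from django.db import connection 

cursor = connection.cursor() 
cursor.execute("SELECT 1")             # illustrative only; any query against the project's database works 
print cursor.fetchone() 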

Hopefully this article is helpful to readers working on Python programming.
