
鍍金池 / Q&A
耍太極 answered:

應(yīng)該是你使用promise的問題,推測是沒有寫catch。

萌二代 answered:

Implement it with HTML + CSS alone.

Change it to this:

var temp='<%=JSON.stringify(server_data)%>'
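For context, a sketch of what the client side then does with that string; the data values below are made up, standing in for whatever `JSON.stringify(server_data)` emits:

```javascript
// The EJS line above inlines the server data as a JSON string;
// the client parses it back into an object before use.
var temp = '{"user":"alice","count":3}'; // stand-in for '<%=JSON.stringify(server_data)%>'
var server_data = JSON.parse(temp);
console.log(server_data.count); // 3
```

Note that if the serialized data can contain single quotes or backslashes, the inlined string needs additional escaping, or the quotes in the template will break.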
紓惘 answered:

Roughly the idea: put the submenu's HTML inside the parent element.

<div onmouseover="$('#menu1').show()" onmouseout="$('#menu1').hide()">
    <div>nav icon</div>
    <div id="menu1" onmouseover="$('#menu2').show()" onmouseout="$('#menu2').hide()">
        <div id="menu2"></div>
    </div>
</div>
離殤 answered:

I suggest reading up on these keywords: network game synchronization methods (frame/lockstep synchronization vs. state synchronization).

不討囍 answered:

This is from CSDN; I'm copying it straight over:

I needed to build "viewpoints" — viewpoint rooms similar to Zhihu's topics — so I had to find a way to crawl them. After half a day of fiddling I finally got it working. The code is Python; if you don't know Python, please go learn it yourself. If you do, just read the code — it definitely works.


#coding:utf-8
"""
@author:haoning
@create time:2015.8.5
"""
from __future__ import division  # true division
from Queue import Queue
import json
import os
import re
import platform
import uuid
import urllib
import urllib2
import sys
import time
import MySQLdb as mdb
from bs4 import BeautifulSoup


reload(sys)
sys.setdefaultencoding("utf-8")  # Python 2 hack: default all str conversions to UTF-8


headers = {
   'User-Agent' : 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:35.0) Gecko/20100101 Firefox/35.0',
   'Content-Type':'application/x-www-form-urlencoded; charset=UTF-8',
   'X-Requested-With':'XMLHttpRequest',
   'Referer':'https://www.zhihu.com/topics',
   'Cookie':'__utma=51854390.517069884.1416212035.1416212035.1416212035.1; q_c1=c02bf44d00d240798bfabcfc95baeb56|1455778173000|1416205243000; _za=b1c8ae35-f986-46a2-b24a-cb9359dc6b2a; aliyungf_tc=AQAAAJ1m71jL1woArKqF22VFnL/wRy6C; _xsrf=9d494558f9271340ab24598d85b2a3c8; cap_id="MDNiMjcwM2U0MTRhNDVmYjgxZWVhOWI0NTA2OGU5OTg=|1455864276|2a4ce8247ebd3c0df5393bb5661713ad9eec01dd"; n_c=1; _alicdn_sec=56c6ba4d556557d27a0f8c876f563d12a285f33a'
}


DB_HOST = '127.0.0.1'
DB_USER = 'root'
DB_PASS = 'root'


queue= Queue() # work queue
nodeSet=set()
keywordSet=set()
stop=0
offset=-20
level=0
maxLevel=7
counter=0
base=""


conn = mdb.connect(DB_HOST, DB_USER, DB_PASS, 'zhihu', charset='utf8')
conn.autocommit(False)
curr = conn.cursor()


def get_html(url):
    try:
        req = urllib2.Request(url)
        response = urllib2.urlopen(req,None,3) # a proxy should be added here
        html = response.read()
        return html
    except:
        pass
    return None


def getTopics():
    url = 'https://www.zhihu.com/topics'
    print url
    try:
        req = urllib2.Request(url)
        response = urllib2.urlopen(req) # a proxy should be added here
        html = response.read().decode('utf-8')
        soup = BeautifulSoup(html)
        lis = soup.find_all('li', {'class' : 'zm-topic-cat-item'})
        
        for li in lis:
            data_id=li.get('data-id')
            name=li.text
            curr.execute('select id from classify_new where name=%s',(name))
            y= curr.fetchone()
            if not y:
                curr.execute('INSERT INTO classify_new(data_id,name)VALUES(%s,%s)',(data_id,name))
        conn.commit()
    except Exception as e:
        print "get topic error",e
        


def get_extension(name):  
    where=name.rfind('.')
    if where!=-1:
        return name[where:len(name)]
    return None




def which_platform():
    sys_str = platform.system()
    return sys_str


def GetDateString():
    when=time.strftime('%Y-%m-%d',time.localtime(time.time()))
    foldername = str(when)
    return foldername 


def makeDateFolder(par,classify):
    try:
        if os.path.isdir(par):
            newFolderName=par + '//' + GetDateString() + '//'  +str(classify)
            if which_platform()=="Linux":
                newFolderName=par + '/' + GetDateString() + "/" +str(classify)
            if not os.path.isdir( newFolderName ):
                os.makedirs( newFolderName )
            return newFolderName
        else:
            return None 
    except Exception,e:
        print "makeDateFolder error",e
    return None 


def download_img(url,classify):
    try:
        extention=get_extension(url)
        if(extention is None):
            return None
        req = urllib2.Request(url)
        resp = urllib2.urlopen(req,None,3)
        dataimg=resp.read()
        name=str(uuid.uuid1()).replace("-","")+"_www.guandn.com"+extention
        top="E://topic_pic"
        folder=makeDateFolder(top, classify)
        filename=None
        if folder is not None:
            filename = folder+"//"+name
        try:
            if "e82bab09c_m" in str(url):
                return True
            if not os.path.exists(filename):
                file_object = open(filename,'w+b')
                file_object.write(dataimg)
                file_object.close()
                return '/room/default/'+GetDateString()+'/'+str(classify)+"/"+name
            else:
                print "file exist"
                return None
        except IOError,e1:
            print "e1=",e1
            pass
    except Exception as e:
        print "eee",e
        pass
    return None # if the download failed, fall back to the original site's link


def getChildren(node,name):
    global queue,nodeSet
    try:
        url="https://www.zhihu.com/topic/"+str(node)+"/hot"
        html=get_html(url)
        if html is None:
            return
        soup = BeautifulSoup(html)
        p_ch='父話題'
        node_name=soup.find('div', {'id' : 'zh-topic-title'}).find('h1').text
        topic_cla=soup.find('div', {'class' : 'child-topic'})
        if topic_cla is not None:
            try:
                p_ch=str(topic_cla.text)
                aList = soup.find_all('a', {'class' : 'zm-item-tag'}) # get all child topic nodes
                if u'子話題' in p_ch:
                    for a in aList:
                        token=a.get('data-token')
                        a=str(a).replace('\n','').replace('\t','').replace('\r','')
                        start=str(a).find('>')
                        end=str(a).rfind('</a>')
                        new_node=str(str(a)[start+1:end])
                        curr.execute('select id from rooms where name=%s',(new_node)) # first make sure the name is unique
                        y= curr.fetchone()
                        if not y:
                            print "y=",y,"new_node=",new_node,"token=",token
                            queue.put((token,new_node,node_name))
            except Exception as e:
                print "add queue error",e
    except Exception as e:
        print "get html error",e
        
    


def getContent(n,name,p,top_id):
    try:
        global counter
        curr.execute('select id from rooms where name=%s',(name)) # first make sure the name is unique
        y= curr.fetchone()
        print "exist?? ",y,"n=",n
        if not y:
            url="https://www.zhihu.com/topic/"+str(n)+"/hot"
            html=get_html(url)
            if html is None:
                return
            soup = BeautifulSoup(html)
            title=soup.find('div', {'id' : 'zh-topic-title'}).find('h1').text
            pic_path=soup.find('a',{'id':'zh-avartar-edit-form'}).find('img').get('src')
            description=soup.find('div',{'class':'zm-editable-content'})
            if description is not None:
                description=description.text
                
            if (u"未歸類" in title or u"根話題" in title): # still allow insertion, to avoid an infinite loop
                description=None
                
            tag_path=download_img(pic_path,top_id)
            print "tag_path=",tag_path
            if tag_path is not None:
                if tag_path==True:
                    tag_path=None
                father_id=2 # defaults to the "misc chat" category
                curr.execute('select id from rooms where name=%s',(p))
                results = curr.fetchall()
                for r in results:
                    father_id=r[0]
                name=title
                curr.execute('select id from rooms where name=%s',(name)) # first make sure the name is unique
                y= curr.fetchone()
                print "store see..",y
                if not y:
                    friends_num=0
                    temp = time.time()
                    x = time.localtime(float(temp))
                    create_time = time.strftime("%Y-%m-%d %H:%M:%S",x) # current time
                    creater_id=None
                    room_avatar=tag_path
                    is_pass=1
                    has_index=0
                    reason_id=None  
                    #print father_id,name,friends_num,create_time,creater_id,room_avatar,is_pass,has_index,reason_id
                    ###################### entries qualified for insertion
                    counter=counter+1
                    curr.execute("INSERT INTO rooms(father_id,name,friends_num,description,create_time,creater_id,room_avatar,is_pass,has_index,reason_id)VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)",(father_id,name,friends_num,description,create_time,creater_id,room_avatar,is_pass,has_index,reason_id))
                    conn.commit() # must commit immediately, otherwise the parent node cannot be found
                    if counter % 200==0:
                        print "current node",name,"num",counter
    except Exception as e:
        print "get content error",e       


def work():
    global queue
    curr.execute('select id,node,parent,name from classify where status=1')
    results = curr.fetchall()
    for r in results:
        top_id=r[0]
        node=r[1]
        parent=r[2]
        name=r[3]
        try:
            queue.put((node,name,parent)) # seed the queue
            while queue.qsize() >0:
                n,nm,p=queue.get() # pop the head node
                getContent(n,nm,p,top_id)
                getChildren(n,nm) # enqueue the popped node's children
            conn.commit()
        except Exception as e:
            print "what's wrong",e  
            
def new_work():
    global queue
    curr.execute('select id,data_id,name from classify_new_copy where status=1')
    results = curr.fetchall()
    for r in results:
        top_id=r[0]
        data_id=r[1]
        name=r[2]
        try:
            get_topis(data_id,name,top_id)
        except:
            pass




def get_topis(data_id,name,top_id):
    global queue
    url = 'https://www.zhihu.com/node/TopicsPlazzaListV2'
    isGet = True;
    offset = -20;
    data_id=str(data_id)
    while isGet:
        offset = offset + 20
        values = {'method': 'next', 'params': '{"topic_id":'+data_id+',"offset":'+str(offset)+',"hash_id":""}'}
        try:
            msg=None
            try:
                data = urllib.urlencode(values)
                request = urllib2.Request(url,data,headers)
                response = urllib2.urlopen(request,None,5)
                html=response.read().decode('utf-8')
                json_str = json.loads(html)
                ms=json_str['msg']
                if len(ms) <5:
                    break
                msg=ms[0]
            except Exception as e:
                print "eeeee",e
            #print msg
            if msg is not None:
                soup = BeautifulSoup(str(msg))
                blks = soup.find_all('div', {'class' : 'blk'})
                for blk in blks:
                    page=blk.find('a').get('href')
                    if page is not None:
                        node=page.replace("/topic/","") # store more seed topics
                        parent=name
                        ne=blk.find('strong').text
                        try:
                            queue.put((node,ne,parent)) # seed the queue
                            while queue.qsize() >0:
                                n,name,p=queue.get() # pop the head node
                                size=queue.qsize()
                                if size > 0:
                                    print size
                                getContent(n,name,p,top_id)
                                getChildren(n,name) # enqueue the popped node's children
                            conn.commit()
                        except Exception as e:
                            print "what's wrong",e  
        except urllib2.URLError, e:
            print "error is",e
            pass 
            
        
if __name__ == '__main__':
    i=0
    while i<400:
        new_work()
        i=i+1

About the database: I won't attach a dump here — just create the tables from the field names, since it really is that simple. I used MySQL; build yours to match your own needs.

If anything is unclear, come find me at 去轉(zhuǎn)盤網(wǎng) (which I also built); the QQ group number is kept up to date there. I'm not posting a QQ number here, to avoid getting banned by the system.

大濕胸 answered:

Use JavaScript:

const img = new Image();
img.onload = ()=>{
  img.width = img.width / 4;
  // other DOM-rendering code
};
img.src = 'image URL here';
魚梓 answered:

This is my Aliyun virtual host and its database — you can see their addresses are different.
Yet when I use phpStudy, the server files and database files sit in different folders under the same root directory,
which confuses me...

Which of the following is data transfer between the server and the database equivalent to?

  • copying between directories foo and bar on the C: drive
  • copying between the C: and D: drives
  • copying between different computers
嫑吢丕 answered:

I read the source — this is a bug in 1.8.4. In earlier versions seaslog.level defaulted to 0 and recorded all logs;
in 1.8.4 the order is reversed, so 0 records only emergency logs. I suggest staying on an earlier version until the author fixes it.
Setting seaslog.level in the config file does not actually change SeasLog's log level.

1.8.4 log levels
#define SEASLOG_ALL_INT                     8
#define SEASLOG_DEBUG_INT                   7
#define SEASLOG_INFO_INT                    6
#define SEASLOG_NOTICE_INT                  5
#define SEASLOG_WARNING_INT                 4
#define SEASLOG_ERROR_INT                   3
#define SEASLOG_CRITICAL_INT                2
#define SEASLOG_ALERT_INT                   1
#define SEASLOG_EMERGENCY_INT               0

If you want to log with 1.8.4 anyway, download the 1.8.4 SeasLog source package from PECL,
and in the source file /Path/To/SeasLog-1.8.4/seaslog.c, inside

PHP_MINIT_FUNCTION(seaslog)
{
    ...
    SEASLOG_G(level) = SEASLOG_ALL_INT;//Line 224
}

add this log-level initialization to the function, then compile and install.

空痕 answered:

It should be this one:
Customize Theme (定制主題)

毀與悔 answered:

Find the emoji's SVG, convert it to base64, and put that in the href?

呆萌傻 answered:

Computing this on the front end is actually fairly easy. The problem raised above is not a big deal: if you subtract the months first and then compare the days, that issue disappears. And if you count 30 days as one month, it becomes simpler still.

話寡 answered:

Check the DNS and the subnet mask.

別傷我 answered:

In JS, creating objects with literals is more common — it reads more clearly and performs better — but it generally suits simpler scenarios, since it cannot accommodate more complex, varying requirements.
A constructor can accept different parameters to create objects with the same prototype but different properties, consolidating the creation code for similar objects. That cuts both ways: as the use cases evolve, it can greatly increase the difficulty of understanding and maintaining the code.
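To illustrate the trade-off, a small sketch (the `Point` names are illustrative, not from the question):

```javascript
// Literal: concise, good for a one-off object.
const point = { x: 1, y: 2 };

// Constructor: parameterized creation, with methods shared via the prototype
// instead of being re-created on every object.
function Point(x, y) {
  this.x = x;
  this.y = y;
}
Point.prototype.norm = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y);
};

const p = new Point(3, 4);
console.log(p.norm()); // 5
```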

維她命 answered:

F:\react-webpack>npm run bulid
npm ERR! missing script: bulid

npm ERR! A complete log of this run can be found in:
npm ERR!     C:\Users\Administrator\AppData\Roaming\npm-cache\_logs\2018-03-19T14_36_12_051Z-debug.log

F:\react-webpack>

This is the error it reports. What is wrong in the configuration? Please take a look — thanks.

{
  "name": "react-webpack",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "build": "webpack"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "webpack": "^4.1.1",
    "webpack-cli": "^2.0.12"
  }
}

This is the content of the JSON file. How should the scripts section be changed? Thanks.

莓森 answered:

Create it and tear it down in the component lifecycle hooks; for details, see the vue-socket.io documentation.

半心人 answered:

this.$router.push('/test');
Could '/test/' be matching a child route under test?
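If that is what is happening, the route table would look something like this — a hypothetical vue-router sketch, with stand-in component names rather than the asker's code:

```javascript
// With a default child route (path: ''), '/test' and '/test/' both match
// the parent record, and the child component is what actually renders
// inside Test's <router-view>.
const Test = { template: '<router-view></router-view>' };
const TestChild = { template: '<div>test child</div>' };

const routes = [
  {
    path: '/test',
    component: Test,
    children: [
      { path: '', component: TestChild } // default child; trailing-slash URLs resolve here
    ]
  }
];
```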

落殤 answered:
listen 10.0.0.1:8080;
listen 127.0.0.1:8080;
listen 80;
listen *:81;
listen localhost:8000;
listen [::]:8001;
listen [::1];
listen unix:/var/run/nginx.sock;

All of the above are supported.

瘋浪 answered:

That's [].shift.call(a),

which is just a.shift(). The key point is that arguments is not a real Array, so [].shift is borrowed via call, binding its this to arguments — that's what lets arguments use the shift method.


function callIt(){
 var fn = [].shift.call(arguments)
 var newArguments = arguments
 return fn.apply(null, newArguments)
}
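Restating the function with a usage example (`add` is illustrative, not from the question):

```javascript
function callIt() {
  var fn = [].shift.call(arguments); // remove and keep the first argument (the function)
  var newArguments = arguments;      // what remains becomes that function's arguments
  return fn.apply(null, newArguments);
}

function add(a, b) { return a + b; }
console.log(callIt(add, 2, 3)); // 5
```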