
孤客 answered:

Implement its delegate method:

- (BOOL)previewController:(QLPreviewController *)controller shouldOpenURL:(NSURL *)url forPreviewItem:(id <QLPreviewItem>)item
{
    if ([url.scheme isEqualToString:@"tel"]) {
        return NO;
    }
    return YES;
}
壞脾滊 answered:
  1. From your description, I guess you are writing an nginx-like HTTP proxy or tampering tool and want to add header fields to particular HTTP requests; I'm not sure whether I've understood what you want to build.
  2. If you do want to modify HTTP requests, the recommended approach is to parse the request, wait until a complete request has been parsed, modify it, and only then forward it to the backend service. Simply scanning incoming packets for a keyword and inserting your header after it is not robust, because the keyword may be split across two packets at the application layer. Put simply, if your keyword is "User-Agent", the first read may end with "User" and the second may begin with "-Agent".
  3. For C, the open-source http-parser library is recommended for parsing HTTP requests; it is hosted on GitHub: https://github.com/arnoldlu/h.... A minimal sketch of this approach is shown below.
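A minimal sketch of that approach, assuming the nodejs/http-parser library (http_parser.h) is available; the request chunks, the buffer size, and the rewriting point are made up for illustration:

#include <stdio.h>
#include <string.h>
#include "http_parser.h"

static char field_buf[256];
static size_t field_len = 0;

/* A header name that spans two packets arrives in several calls,
   so the bytes must be accumulated (the same applies to values). */
static int on_header_field(http_parser *p, const char *at, size_t len) {
    if (field_len + len < sizeof(field_buf)) {
        memcpy(field_buf + field_len, at, len);
        field_len += len;
    }
    return 0;
}

static int on_header_value(http_parser *p, const char *at, size_t len) {
    printf("header: %.*s = %.*s\n", (int)field_len, field_buf, (int)len, at);
    field_len = 0;  /* reset for the next header name */
    return 0;
}

static int on_headers_complete(http_parser *p) {
    /* All headers of the request are known here; this is the safe point
       to rebuild the request with the extra header and forward it. */
    printf("headers complete\n");
    return 0;
}

int main(void) {
    /* "User-Agent" deliberately split across the two reads. */
    const char *chunk1 = "GET / HTTP/1.1\r\nHost: example.com\r\nUser";
    const char *chunk2 = "-Agent: demo\r\n\r\n";

    http_parser parser;
    http_parser_settings settings;
    memset(&settings, 0, sizeof(settings));
    settings.on_header_field = on_header_field;
    settings.on_header_value = on_header_value;
    settings.on_headers_complete = on_headers_complete;

    http_parser_init(&parser, HTTP_REQUEST);
    http_parser_execute(&parser, &settings, chunk1, strlen(chunk1));
    http_parser_execute(&parser, &settings, chunk2, strlen(chunk2));
    return 0;
}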
女流氓 answered:

If you're asking about the difference, there isn't much of one: both are cloud servers, and both are fairly mature.
The deeper, subtler differences are not something small and medium-sized businesses or ordinary users will ever run into.

黑與白 answered:

First, replace your code screenshot with actual code, and format it as code with the site's editor / Markdown syntax. Never, ever post screenshots of code.

As for your code, there are two problems:

  1. scanf_s is an MSVC extension; it requires you to supply the buffer length, and you didn't.
  2. scanf_s/scanf expect a char *, but here your name has type char [4], i.e. it is a character array. Never equate character arrays with pointers: they are two different (derived) types. An array can, however, sometimes be implicitly converted to a pointer. scanf_s/scanf, for example, expect a pointer parameter, but the argument you pass is an array, so an implicit array-to-pointer conversion happens when the argument is passed. If instead you pass &name, you need to understand that although name and &name have the same value (both are just binary numbers you can think of as addresses), their types differ: name, as said above, has type char [4], which can be implicitly converted to a pointer, whereas &name has type char (*)[4], a pointer to an array (of length 4), which cannot be implicitly converted to the char * that scanf/scanf_s require. Hence the error.

Solution:
scanf_s("%s", stu.name, sizeof(stu.name));

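A minimal, compilable sketch of the fix, assuming MSVC's scanf_s and a struct roughly like the one in the question (the field name and its size of 4 come from the discussion above; everything else is made up):

#include <stdio.h>

struct student {
    char name[4];   /* matches the char [4] discussed above */
};

int main(void) {
    struct student stu;

    /* Wrong: &stu.name has type char (*)[4], and scanf_s also needs the
       buffer size as an extra argument:
       scanf_s("%s", &stu.name);                                        */

    /* Right: the array decays to char *, and the size is passed along. */
    scanf_s("%s", stu.name, (unsigned)sizeof(stu.name));

    printf("%s\n", stu.name);
    return 0;
}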
心夠野 answered:

It hasn't gone live yet, so with a bit of effort it can still be changed.
Change the project's encoding, the file-storage encoding, and the filter encoding, and if there are JSPs, the encoding declaration in those pages as well.

If there are places that check field lengths, those need changing too: a Chinese character is three bytes in UTF-8 but two in GBK (see the small illustration below).
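A small, self-contained illustration of that byte-count difference, with the character 中 hand-encoded in both encodings (C is used here only because this thread is in the C section):

#include <stdio.h>
#include <string.h>

int main(void) {
    const char utf8[] = "\xE4\xB8\xAD";  /* 中 in UTF-8: 3 bytes */
    const char gbk[]  = "\xD6\xD0";      /* 中 in GBK:   2 bytes */
    printf("UTF-8: %zu bytes\n", strlen(utf8));
    printf("GBK:   %zu bytes\n", strlen(gbk));
    return 0;
}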

Of course, the best option is to convince the customer. If the database is empty and the customer doesn't want the hassle, they can give you the permissions and you can do it yourself; but if it sits alongside other databases, changing it is probably not an option.

乞許 answered:

The id was missing from the SQL statement, sorry.

溫衫 answered:

If what you are printing is a char, add the length modifier hh to the conversion specification:

printf("%hhx\n", c);

For more length modifiers (hh, h, l, ll, ...) see here. A minimal example follows.
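A minimal example of why the hh modifier matters when printing a char in hex (the value 0xAB is arbitrary; on platforms where char is signed, %x shows the sign-extended value):

#include <stdio.h>

int main(void) {
    char c = (char)0xAB;
    printf("%x\n", c);    /* typically prints ffffffab: c is promoted to int */
    printf("%hhx\n", c);  /* prints ab: the value is converted to unsigned char */
    return 0;
}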

失魂人 answered:

You haven't said which system, compiler, etc. you are using, so it's hard to say exactly why you get garbage output.

One thing is certain, though: using %e to print a long double is wrong. You should use %Le or %LE, because %e corresponds to double.
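A minimal example (the value is arbitrary):

#include <stdio.h>

int main(void) {
    long double x = 3.14159265358979L;
    /* printf("%e\n", x);   wrong: %e expects a double */
    printf("%Le\n", x);     /* correct: the L modifier matches long double */
    return 0;
}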

陌離殤 answered:

i<=h should be i<h; array indices start at 0.

舊言 answered:

CSS cannot manipulate HTML elements. You can add the readonly attribute to the input tag when the HTML page is output (generated by PHP or a JS template). You can look at an HTML tutorial ( http://www.pahei8.com ) to learn what HTML is.

久礙你 answered:

Read the official documentation more; it is quite detailed.

練命 answered:
import pandas as pd

def csv_to_xlsx_pd(csv_pt, encoding='utf-8'):
    csv = pd.read_csv(csv_pt, encoding=encoding)
    csv.to_excel(csv_pt.split('.')[0]+'.xlsx', sheet_name='data')

# TODO: batch-process all files in a directory with os.listdir

if __name__ == '__main__':
    csv_to_xlsx_pd('example.csv')  # example path (hypothetical); pass the CSV file to convert
愚念 answered:

What do you mean? Do you mean opening a URL to search, as in [[UIApplication sharedApplication] openURL:[NSURL URLWithString:@"http://www.google.com"]];? And what is the "system search feature" you mention?

不討囍 answered:

This is from CSDN, copied straight over:

Because I was building 觀點(diǎn), whose "rooms" are similar to Zhihu topics, I had to find a way to crawl them. After quite a bit of fiddling it finally worked. The code is in Python; if you don't know Python, please teach yourself. If you do, just read the code; it definitely works.


#coding:utf-8
"""
@author:haoning
@create time:2015.8.5
"""
from __future__ import division  # true division
from Queue import Queue
from __builtin__ import False
import json
import os
import re
import platform
import uuid
import urllib
import urllib2
import sys
import time
import MySQLdb as mdb
from bs4 import BeautifulSoup


reload(sys)
sys.setdefaultencoding( "utf-8" )


headers = {
   'User-Agent' : 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:35.0) Gecko/20100101 Firefox/35.0',
   'Content-Type':'application/x-www-form-urlencoded; charset=UTF-8',
   'X-Requested-With':'XMLHttpRequest',
   'Referer':'https://www.zhihu.com/topics',
   'Cookie':'__utma=51854390.517069884.1416212035.1416212035.1416212035.1; q_c1=c02bf44d00d240798bfabcfc95baeb56|1455778173000|1416205243000; _za=b1c8ae35-f986-46a2-b24a-cb9359dc6b2a; aliyungf_tc=AQAAAJ1m71jL1woArKqF22VFnL/wRy6C; _xsrf=9d494558f9271340ab24598d85b2a3c8; cap_id="MDNiMjcwM2U0MTRhNDVmYjgxZWVhOWI0NTA2OGU5OTg=|1455864276|2a4ce8247ebd3c0df5393bb5661713ad9eec01dd"; n_c=1; _alicdn_sec=56c6ba4d556557d27a0f8c876f563d12a285f33a'
}


DB_HOST = '127.0.0.1'
DB_USER = 'root'
DB_PASS = 'root'


queue= Queue() # receiving queue
nodeSet=set()
keywordSet=set()
stop=0
offset=-20
level=0
maxLevel=7
counter=0
base=""


conn = mdb.connect(DB_HOST, DB_USER, DB_PASS, 'zhihu', charset='utf8')
conn.autocommit(False)
curr = conn.cursor()


def get_html(url):
    try:
        req = urllib2.Request(url)
        response = urllib2.urlopen(req,None,3) # a proxy should be added here
        html = response.read()
        return html
    except:
        pass
    return None


def getTopics():
    url = 'https://www.zhihu.com/topics'
    print url
    try:
        req = urllib2.Request(url)
        response = urllib2.urlopen(req) # a proxy should be added here
        html = response.read().decode('utf-8')
        print html
        soup = BeautifulSoup(html)
        lis = soup.find_all('li', {'class' : 'zm-topic-cat-item'})
        
        for li in lis:
            data_id=li.get('data-id')
            name=li.text
            curr.execute('select id from classify_new where name=%s',(name))
            y= curr.fetchone()
            if not y:
                curr.execute('INSERT INTO classify_new(data_id,name)VALUES(%s,%s)',(data_id,name))
        conn.commit()
    except Exception as e:
        print "get topic error",e
        


def get_extension(name):  
    where=name.rfind('.')
    if where!=-1:
        return name[where:len(name)]
    return None




def which_platform():
    sys_str = platform.system()
    return sys_str


def GetDateString():
    when=time.strftime('%Y-%m-%d',time.localtime(time.time()))
    foldername = str(when)
    return foldername 


def makeDateFolder(par,classify):
    try:
        if os.path.isdir(par):
            newFolderName=par + '//' + GetDateString() + '//'  +str(classify)
            if which_platform()=="Linux":
                newFolderName=par + '/' + GetDateString() + "/" +str(classify)
            if not os.path.isdir( newFolderName ):
                os.makedirs( newFolderName )
            return newFolderName
        else:
            return None 
    except Exception,e:
        print "kk",e
    return None 


def download_img(url,classify):
    try:
        extention=get_extension(url)
        if(extention is None):
            return None
        req = urllib2.Request(url)
        resp = urllib2.urlopen(req,None,3)
        dataimg=resp.read()
        name=str(uuid.uuid1()).replace("-","")+"_www.guandn.com"+extention
        top="E://topic_pic"
        folder=makeDateFolder(top, classify)
        filename=None
        if folder is not None:
            filename = folder+"//"+name
        try:
            if "e82bab09c_m" in str(url):
                return True
            if not os.path.exists(filename):
                file_object = open(filename,'w+b')
                file_object.write(dataimg)
                file_object.close()
                return '/room/default/'+GetDateString()+'/'+str(classify)+"/"+name
            else:
                print "file exist"
                return None
        except IOError,e1:
            print "e1=",e1
            pass
    except Exception as e:
        print "eee",e
        pass
    return None # if the download failed, fall back to the link on the original site


def getChildren(node,name):
    global queue,nodeSet
    try:
        url="https://www.zhihu.com/topic/"+str(node)+"/hot"
        html=get_html(url)
        if html is None:
            return
        soup = BeautifulSoup(html)
        p_ch='父話題'
        node_name=soup.find('div', {'id' : 'zh-topic-title'}).find('h1').text
        topic_cla=soup.find('div', {'class' : 'child-topic'})
        if topic_cla is not None:
            try:
                p_ch=str(topic_cla.text)
                aList = soup.find_all('a', {'class' : 'zm-item-tag'}) # get all child topic nodes
                if u'子話題' in p_ch:
                    for a in aList:
                        token=a.get('data-token')
                        a=str(a).replace('\n','').replace('\t','').replace('\r','')
                        start=str(a).find('>')
                        end=str(a).rfind('</a>')
                        new_node=str(str(a)[start+1:end])
                        curr.execute('select id from rooms where name=%s',(new_node)) # first make sure the name is unique
                        y= curr.fetchone()
                        if not y:
                            print "y=",y,"new_node=",new_node,"token=",token
                            queue.put((token,new_node,node_name))
            except Exception as e:
                print "add queue error",e
    except Exception as e:
        print "get html error",e
        
    


def getContent(n,name,p,top_id):
    try:
        global counter
        curr.execute('select id from rooms where name=%s',(name)) # first make sure the name is unique
        y= curr.fetchone()
        print "exist?? ",y,"n=",n
        if not y:
            url="https://www.zhihu.com/topic/"+str(n)+"/hot"
            html=get_html(url)
            if html is None:
                return
            soup = BeautifulSoup(html)
            title=soup.find('div', {'id' : 'zh-topic-title'}).find('h1').text
            pic_path=soup.find('a',{'id':'zh-avartar-edit-form'}).find('img').get('src')
            description=soup.find('div',{'class':'zm-editable-content'})
            if description is not None:
                description=description.text
                
            if (u"未歸類" in title or u"根話題" in title): # allow it into the database, but avoid an infinite loop
                description=None
                
            tag_path=download_img(pic_path,top_id)
            print "tag_path=",tag_path
            if (tag_path is not None) or tag_path==True:
                if tag_path==True:
                    tag_path=None
                father_id=2 # default: the 雜談 (misc) category
                curr.execute('select id from rooms where name=%s',(p))
                results = curr.fetchall()
                for r in results:
                    father_id=r[0]
                name=title
                curr.execute('select id from rooms where name=%s',(name)) # first make sure the name is unique
                y= curr.fetchone()
                print "store see..",y
                if not y:
                    friends_num=0
                    temp = time.time()
                    x = time.localtime(float(temp))
                    create_time = time.strftime("%Y-%m-%d %H:%M:%S",x) # get time now
                    create_time
                    creater_id=None
                    room_avatar=tag_path
                    is_pass=1
                    has_index=0
                    reason_id=None  
                    #print father_id,name,friends_num,create_time,creater_id,room_avatar,is_pass,has_index,reason_id
                    ###################### content qualified to go into the database
                    counter=counter+1
                    curr.execute("INSERT INTO rooms(father_id,name,friends_num,description,create_time,creater_id,room_avatar,is_pass,has_index,reason_id)VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)",(father_id,name,friends_num,description,create_time,creater_id,room_avatar,is_pass,has_index,reason_id))
                    conn.commit() # commit immediately, otherwise the parent node cannot be found
                    if counter % 200==0:
                        print "current node",name,"num",counter
    except Exception as e:
        print "get content error",e       


def work():
    global queue
    curr.execute('select id,node,parent,name from classify where status=1')
    results = curr.fetchall()
    for r in results:
        top_id=r[0]
        node=r[1]
        parent=r[2]
        name=r[3]
        try:
            queue.put((node,name,parent)) # enqueue first
            while queue.qsize() >0:
                n,name,p=queue.get() # dequeue the head node
                getContent(n,name,p,top_id)
                getChildren(n,name) # children of the dequeued node
            conn.commit()
        except Exception as e:
            print "what's wrong",e  
            
def new_work():
    global queue
    curr.execute('select id,data_id,name from classify_new_copy where status=1')
    results = curr.fetchall()
    for r in results:
        top_id=r[0]
        data_id=r[1]
        name=r[2]
        try:
            get_topis(data_id,name,top_id)
        except:
            pass




def get_topis(data_id,name,top_id):
    global queue
    url = 'https://www.zhihu.com/node/TopicsPlazzaListV2'
    isGet = True;
    offset = -20;
    data_id=str(data_id)
    while isGet:
        offset = offset + 20
        values = {'method': 'next', 'params': '{"topic_id":'+data_id+',"offset":'+str(offset)+',"hash_id":""}'}
        try:
            msg=None
            try:
                data = urllib.urlencode(values)
                request = urllib2.Request(url,data,headers)
                response = urllib2.urlopen(request,None,5)
                html=response.read().decode('utf-8')
                json_str = json.loads(html)
                ms=json_str['msg']
                if len(ms) <5:
                    break
                msg=ms[0]
            except Exception as e:
                print "eeeee",e
            #print msg
            if msg is not None:
                soup = BeautifulSoup(str(msg))
                blks = soup.find_all('div', {'class' : 'blk'})
                for blk in blks:
                    page=blk.find('a').get('href')
                    if page is not None:
                        node=page.replace("/topic/","") # store more seed topics
                        parent=name
                        ne=blk.find('strong').text
                        try:
                            queue.put((node,ne,parent)) # enqueue first
                            while queue.qsize() >0:
                                n,name,p=queue.get() # dequeue the head node
                                size=queue.qsize()
                                if size > 0:
                                    print size
                                getContent(n,name,p,top_id)
                                getChildren(n,name) # children of the dequeued node
                            conn.commit()
                        except Exception as e:
                            print "what's wrong",e  
        except urllib2.URLError, e:
            print "error is",e
            pass 
            
        
if __name__ == '__main__':
    i=0
    while i<400:
        new_work()
        i=i+1

About the database: I won't attach a dump here. Build the tables from the fields yourself, because it really is that simple. I used MySQL; build them according to your own needs.

If anything is unclear, come find me at 去轉(zhuǎn)盤網(wǎng) (I built that site too); the QQ group number is kept up to date there. I'm not leaving a QQ number or the like here, in case the system bans me for it.

脾氣硬 answered:

Your type earlier is char, which denotes a single character; in C/C++ a char holds exactly one character.
If you want to represent a string, use:
const char * a = "A";
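A minimal illustration of the difference:

#include <stdio.h>

int main(void) {
    char c = 'A';            /* a single character, one byte */
    const char *s = "A";     /* a string literal: 'A' followed by '\0' */
    printf("%c %s\n", c, s);
    printf("%zu %zu\n", sizeof(c), sizeof("A"));  /* prints 1 2 */
    return 0;
}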

耍太極 answered:

React.Component creates a React component as an ES6 class. It is the way React currently recommends for creating stateful components, and it will eventually replace the React.createClass form.

One difference between React.createClass and React.Component is automatic this binding:
in components created with React.createClass, React binds this for every member function automatically, so you can call this.method anywhere and this will be set correctly. With React.Component (ES6 classes) there is no such auto-binding, so you have to bind methods yourself, for example in the constructor or with arrow functions.

情殺 answered:

You can do the conversion with a regular expression.
As shown in the screenshot (not preserved here), click "Use regular expression".

In the search box enter ([a-z]+)\n* and in the replacement box enter '$1', (including the quotes and the trailing space).
The comma after the last entry (z) won't come out right, so handle it separately afterwards.