Everyone loves Google Trends, but long-tail keywords are a bit tricky. We all like the official Google Trends service for insights into search behavior. However, two things keep many people from relying on it for solid work:
- When you need to find new niche keywords, there is not enough data in Google Trends.
- There is no official API for making requests to Google Trends: when we use unofficial wrappers such as pytrends, we have to route requests through proxy servers or we get blocked.
In this article, I want to share a Python script we wrote that exports trending keywords via Google Autosuggest.
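To make the request mechanics concrete, here is a minimal sketch of a single call to the unofficial Autosuggest endpoint that the full script below uses (the query "running shoes" is just a placeholder):

import json
import requests

# Unofficial Google Autosuggest endpoint (the same one the full script uses)
URL = "http://suggestqueries.google.com/complete/search"
# "running shoes" stands in for any seed query you want suggestions for
PARAMS = {"client": "opera", "hl": "en", "gl": "US", "q": "running shoes"}

response = requests.get(URL, params=PARAMS)
# The second element of the returned JSON array holds the suggested searches
suggestions = json.loads(response.content.decode("utf-8"))[1]
print(suggestions)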
Fetch and Store Autosuggest Results Over Time
Suppose we have 1,000 seed keywords to send to Google Autosuggest. In return, we will probably get back around 200,000 long-tail keywords. Then we need to do the same one week later and compare the two datasets to answer two questions (a short sketch of the comparison follows this list):
- Which queries are new keywords compared to the last run? This is probably the case we are after: Google thinks those queries are becoming more important, and by tracking them we can build our own Google Autosuggest trend solution!
- Which queries are keywords that are no longer trending?
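The comparison itself boils down to set differences between two collection runs. A minimal sketch of the idea, using made-up suggestion strings (the full script below does the same per seed keyword and also tracks the dates):

# Hypothetical suggestion sets collected one week apart (example values only)
last_week = {"keto diet plan", "keto diet recipes", "keto diet cost"}
this_week = {"keto diet plan", "keto diet recipes", "keto diet for beginners"}

new_keywords = this_week - last_week        # just appeared in Autosuggest: trending up
dropped_keywords = last_week - this_week    # no longer suggested: trending down

print(new_keywords)      # {'keto diet for beginners'}
print(dropped_keywords)  # {'keto diet cost'}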
The script is quite simple, and most of the code I already shared here. The updated version stores the data from past runs and compares the suggestions over time. We avoided file-based databases such as SQLite to keep things simple, so all data storage below uses CSV files. This lets you import the file into Excel and explore niche keyword trends relevant to your business.
How to Use This Python Script
- Provide the set of seed keywords that should be sent to autocomplete: keywords.csv
- Adjust the script settings to your needs:
  - Language: default is "en"
  - Country: default is "US"
- Schedule the script to run once a week. You can also run it manually whenever you need to.
- Use keyword_suggestions.csv for further analysis:
  - first_seen: the date the query first appeared in Autosuggest
  - last_seen: the date the query was last seen
  - is_new: if first_seen == last_seen we set this to True – just filter on this value to get the new trending searches in Google Autosuggest (see the pandas sketch after this list).
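Because the output is a plain CSV, you can filter it in Excel or with a couple of lines of pandas. A minimal sketch for pulling out the new trending suggestions (column names are the ones the script writes):

import pandas as pd

# Load the results written by the script
df = pd.read_csv("keyword_suggestions.csv")

# Keep only suggestions that appeared for the first time in the latest run
new_trends = df[df["is_new"] == True]
print(new_trends[["Keyword", "Suggestion", "first_seen"]])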
Here Is the Python Code
# Pemavor.com Autocomplete Trends
# Author: Stefan Neefischer (stefan.neefischer@gmail.com)
import concurrent.futures
from datetime import date
from datetime import datetime
import pandas as pd
import itertools
import requests
import string
import json
import time
charList = " " + string.ascii_lowercase + string.digits
def makeGoogleRequest(query):
    # If you make requests too quickly, you may be blocked by Google
    time.sleep(WAIT_TIME)
    URL = "http://suggestqueries.google.com/complete/search"
    PARAMS = {"client": "opera",
              "hl": LANGUAGE,
              "q": query,
              "gl": COUNTRY}
    response = requests.get(URL, params=PARAMS)
    if response.status_code == 200:
        try:
            # The second element of the JSON payload holds the suggested searches
            suggestedSearches = json.loads(response.content.decode('utf-8'))[1]
        except:
            suggestedSearches = json.loads(response.content.decode('latin-1'))[1]
        return suggestedSearches
    else:
        return "ERR"
def getGoogleSuggests(keyword):
    # err_count1 = 0
    # Query the keyword followed by a space and every letter/digit to widen the suggestion pool
    queryList = [keyword + " " + char for char in charList]
    suggestions = []
    for query in queryList:
        suggestion = makeGoogleRequest(query)
        if suggestion != 'ERR':
            suggestions.append(suggestion)
    # Remove empty suggestions
    suggestions = set(itertools.chain(*suggestions))
    if "" in suggestions:
        suggestions.remove("")
    return suggestions
def autocomplete(csv_fileName):
    dateTimeObj = datetime.now().date()
    # Read the csv file that contains the seed keywords you want to send to Google Autocomplete
    df = pd.read_csv(csv_fileName)
    keywords = df.iloc[:, 0].tolist()
    resultList = []
    # Fetch suggestions for all seed keywords in parallel
    with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_WORKERS) as executor:
        futuresGoogle = {executor.submit(getGoogleSuggests, keyword): keyword for keyword in keywords}
        for future in concurrent.futures.as_completed(futuresGoogle):
            key = futuresGoogle[future]
            for suggestion in future.result():
                resultList.append([key, suggestion])
    # Convert the results to a dataframe
    suggestion_new = pd.DataFrame(resultList, columns=['Keyword', 'Suggestion'])
    del resultList
    # If we have old results, read them
    try:
        suggestion_df = pd.read_csv("keyword_suggestions.csv")
    except:
        suggestion_df = pd.DataFrame(columns=['first_seen', 'last_seen', 'Keyword', 'Suggestion'])
    suggestionCommon_list = []
    suggestionNew_list = []
    for keyword in suggestion_new["Keyword"].unique():
        new_df = suggestion_new[suggestion_new["Keyword"] == keyword]
        old_df = suggestion_df[suggestion_df["Keyword"] == keyword]
        newSuggestion = set(new_df["Suggestion"].to_list())
        oldSuggestion = set(old_df["Suggestion"].to_list())
        commonSuggestion = list(newSuggestion & oldSuggestion)
        new_Suggestion = list(newSuggestion - oldSuggestion)
        # Suggestions seen before: only their last_seen date gets updated
        for suggest in commonSuggestion:
            suggestionCommon_list.append([dateTimeObj, keyword, suggest])
        # Suggestions seen for the first time: first_seen == last_seen == today
        for suggest in new_Suggestion:
            suggestionNew_list.append([dateTimeObj, dateTimeObj, keyword, suggest])
    # New keywords
    newSuggestion_df = pd.DataFrame(suggestionNew_list, columns=['first_seen', 'last_seen', 'Keyword', 'Suggestion'])
    # Shared keywords with updated last_seen date
    commonSuggestion_df = pd.DataFrame(suggestionCommon_list, columns=['last_seen', 'Keyword', 'Suggestion'])
    merge = pd.merge(suggestion_df, commonSuggestion_df, left_on=["Suggestion"], right_on=["Suggestion"], how='left')
    merge = merge.rename(columns={'last_seen_y': 'last_seen', "Keyword_x": "Keyword"})
    merge["last_seen"] = merge["last_seen"].fillna(merge["last_seen_x"])
    del merge["last_seen_x"]
    del merge["Keyword_y"]
    # Merge old results with new results
    frames = [merge, newSuggestion_df]
    keywords_df = pd.concat(frames, ignore_index=True, sort=False)
    # Save dataframe as a CSV file
    keywords_df['first_seen'] = pd.to_datetime(keywords_df['first_seen'])
    keywords_df = keywords_df.sort_values(by=['first_seen', 'Keyword'], ascending=[False, False])
    keywords_df['last_seen'] = pd.to_datetime(keywords_df['last_seen'])
    keywords_df['is_new'] = (keywords_df['first_seen'] == keywords_df['last_seen'])
    keywords_df = keywords_df[['first_seen', 'last_seen', 'Keyword', 'Suggestion', 'is_new']]
    keywords_df.to_csv('keyword_suggestions.csv', index=False)
# If you use more than 50 seed keywords you should slow down your requests - otherwise Google will block the script
# If you have thousands of seed keywords use e.g. WAIT_TIME = 1 and MAX_WORKERS = 5
WAIT_TIME = 0.2
MAX_WORKERS = 20
# set the autocomplete language
LANGUAGE = "en"
# set the autocomplete country code - DE, US, TR, GR, etc..
COUNTRY="US"
# Keyword_seed csv file name. One column csv file.
#csv_fileName="keyword_seeds.csv"
CSV_FILE_NAME="keywords.csv"
autocomplete(CSV_FILE_NAME)
# The results are saved in the keyword_suggestions.csv file