Starting threads in a for cycle produces multiple results - python

-I found the problem!- In the function SendMessage I was using UserID (capitalized) instead of userid (the actual parameter passed to each thread). So Python printed the UserID of the enclosing for loop instead of the "individual" userid passed to each thread. It was only a logging problem; the program sent the messages correctly.
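A minimal sketch of the same mistake, for anyone else hitting it: the thread's target reads the module-level loop variable instead of its own parameter, and by the time the threads actually run, the loop has already moved on (the names here are illustrative):

import threading
import time

def send(userid):
    time.sleep(0.01)            # simulate network latency
    print("param:", userid)     # correct: the value this thread was given
    print("outer:", UserID)     # the bug: reads the shared loop variable

for UserID in [1, 2, 3]:
    threading.Thread(target=send, args=[UserID]).start()
# "param" prints 1, 2, 3; "outer" usually prints 3 for every thread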
I have a for loop that iterates over the elements of a user list. On each iteration, I would like to start a separate background thread to send a message to that user. By "send a message" I mean a simple POST request made using the requests Python lib. At the end of the thread, a line is written to the console. Every 24 requests (so every 24 threads) the app needs to pause for about a second.
Success = 0
Bounces = 0

def SendMessage(botid, token, userid, messageid, tag):
    global Success
    global Bounces
    try:
        payload = {...}
        r = requests.post("...", params=payload, headers=head, timeout=2)
        # problem with the request?
        pjson = json.loads(r.text)
        if r.status_code != 200:
            # NOTE: UserID/BotID below are the outer loop's variables, not the
            # parameters passed to this thread -- the logging bug described above
            log(str(r.status_code) + " " + pjson["result"] + " UserID: " + UserID + "; URL: " + "..." + BotID + "/users/" + UserID + "/send; Params: " + str(payload))
            Bounces += 1
            return
        Success += 1
        return
    except requests.exceptions.Timeout:
        # wait for the connection to be available again!
        while not conn_available():
            print("... Waiting for a new connection...")
            time.sleep(10)
        log("Request timed out. UserID: " + UserID + "; URL: " + "..." + BotID + "/users/" + UserID + "/send; Params: " + str(payload))
        Bounces += 1
    except requests.exceptions.ConnectionError:
        log("Unable to connect. UserID: " + UserID + "; URL: " + "..." + BotID + "/users/" + UserID + "/send; Params: " + str(payload))
        Bounces += 1
    except requests.exceptions.HTTPError:
        log("Invalid request. UserID: " + UserID + "; URL: " + "..." + BotID + "/users/" + UserID + "/send; Params: " + str(payload))
        Bounces += 1
    except requests.exceptions.RequestException:
        log("Invalid request. UserID: " + UserID + "; URL: " + "..." + BotID + "/users/" + UserID + "/send; Params: " + str(payload))
        Bounces += 1
while True:
    newMsgsReq = ""
    try:
        # Check for new messages
        newMsgsReq = requests.get("...", timeout=2)
        if newMsgsReq.text == "false":
            # no new messages; retry shortly
            time.sleep(2)
            continue
    except requests.exceptions.HTTPError as errh:
        log("Request has failed: There was an error in the request: [" + str(errh) + "]")
        time.sleep(2)
        continue
    except requests.exceptions.ConnectionError as errc:
        log("Request has failed: check internet connection & retry: [" + str(errc) + "]")
        time.sleep(2)
        continue
    except requests.exceptions.Timeout as errt:
        log("Request has failed: check internet connection & retry: [" + str(errt) + "]")
        time.sleep(2)
        continue
    except requests.exceptions.RequestException as err:
        log("Request has failed: There was an error in the request: [" + str(err) + "]")
        time.sleep(2)
        continue
    # we have a message!!!
    # Extract BotID, Token, MessageID
    msgInf = newMsgsReq.text.split("|")
    MessageID = msgInf[0]
    BotID = msgInf[1]
    Token = msgInf[2]
    Tag = msgInf[3]
    del msgInf[0:4]
    suc("New message found: " + str(MessageID))
    suc("Total recipients: " + str(len(msgInf)))
    # Begin send!
    Cycles = 0
    TotCycles = 0
    # Loop through msgInf
    for UserID in msgInf:
        # Create the thread.
        process = threading.Thread(target=SendMessage, args=[BotID, Token, UserID, MessageID, Tag])
        process.start()
        TotCycles += 1
        pb.print_progress_bar(TotCycles)
        Cycles += 1
        if Cycles == 24:
            time.sleep(1)
            Cycles = 0
    suc("Message " + str(MessageID) + " sent successfully (" + str(Success) + " success, " + str(Bounces) + " bounces)")
    Success = 0
    Bounces = 0
    time.sleep(3)
Let's say my user list is:
{1, 2, 3, 4, ..., 24, 25, ...}. I expect my application to output:
1. Message 1 sent successfully...
2. Message 2 sent successfully...
...
24. Message 24 sent successfully.
Instead, I am getting this output:
1. Message 1 sent successfully.
2. Message 1 sent successfully.
...
24. Message 1 sent successfully.
So all 24 outputs refer to the first of the 24 IDs. It seems like the for loop does not advance...

This prints the incremented counter without any trouble, so I think you may need to provide all of the code and some sample input.
import threading
import time

def SendMessage(userid):
    print(userid)

while True:
    cycles = 1
    for user_id in [1, 2, 3]:
        process = threading.Thread(target=SendMessage, args=[user_id])
        process.start()
        cycles += 1
        if cycles == 24:
            time.sleep(1)
            cycles = 0
    time.sleep(3)


Python-telegram-bot bypass flood and 429 error using Scrapy

I follow price drops on the target site. If there is a price decrease that matches the rules I have set, it is recorded in the notificate table. From there, a Telegram notification is sent through the code I created in the pipelines.py file.
Sometimes the target site discounts too many products, and 200 products can come from the notificate table at once. I hit Telegram's flood protection while sending these.
Things I've tried:
1- Rotating multiple tokens
2- Adding sleep time
However, I cannot say I was successful; I'm still stuck at the flood barrier.
What I want to do is this: no matter how many notifications come from the notificate table, queue them and send them in a way that does not exceed 20 messages per second.
How can I do that?
My pipeline.py code:
def sendnotifications(self, token):
    # (this method also relies on: import random, telegram; from time import sleep;
    #  from telegram import ParseMode)
    cursor = self.cnx.cursor()
    cursor.execute("SELECT * FROM notificate WHERE token= '" + token + "'")
    notifications = cursor.fetchall()
    for notification in notifications:
        print(notification)
        productid = notification[1]
        url = notification[3]
        name = notification[2]
        old = notification[4]
        new = notification[5]
        price_difference = old - new
        percentage = price_difference / old
        percentage_str = str("%.2f" % (percentage * 100))
        message = "<b>" + name + "</b>" + "\n\n" + \
                  str(old) + " TL >>>> " + \
                  str(new) + f" TL - {percentage_str}%" + "\n\n" + \
                  url + "\n\n"
        if str(old) == "1.00" or str(old) == "2.00":
            message = "<b>" + name + "</b>" + "\n\n" + \
                      "<b>" + str(new) + " TL\n\n" + "</b>" + \
                      url + "\n\n"
        token_list = [
            "xxxxxxxxxxxxxxxxxxxxxxx",
            "yyyyyyyyyyyyyyyyyyyyyyyy",
            "zzzzzzzzzzzzzzzzzzzzzzzzzzz",
            "aaaaaaaaaaaaaaaaaaaaaaaaaaaa",
            "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
            "ccccccccccccccccccccccccccccccccc",
        ]
        TOKEN = token_list[random.randint(0, len(token_list) - 1)]
        chat_id = "-100xxxxxxxxxxxxx"
        bot = telegram.Bot(token=TOKEN)
        # tel_url = bot.sendMessage(chat_id=chat_id, text=message, parse_mode=ParseMode.HTML)
        try:
            bot.sendMessage(chat_id=chat_id, text=message, parse_mode=ParseMode.HTML)
            sleep(0.05)
        except Exception:
            return False
    cursor.close()
    return True
The obvious way seems to be to chunk the updates into batches of 20 and sleep for more than 1 second between them:
# ...
notifications = cursor.fetchall()
cursor.close()

for i in range(0, len(notifications), 20):
    chunk = notifications[i:i+20]
    for notification in chunk:
        print(notification)
        # ...
    time.sleep(1.5)
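If the sends happen from more than one place, a small reusable limiter may be cleaner than inline chunking. A minimal sketch under the same 20-per-second requirement (the RateLimiter class is illustrative, not part of python-telegram-bot):

import time

class RateLimiter:
    """Allow at most `rate` calls per `per`-second sliding window."""
    def __init__(self, rate=20, per=1.0):
        self.rate = rate
        self.per = per
        self.stamps = []  # timestamps of calls inside the current window

    def wait(self):
        now = time.monotonic()
        self.stamps = [t for t in self.stamps if now - t < self.per]
        if len(self.stamps) >= self.rate:
            # sleep until the oldest call falls out of the window
            time.sleep(self.per - (now - self.stamps[0]))
        self.stamps.append(time.monotonic())

limiter = RateLimiter(rate=20, per=1.0)
# inside the notification loop:
#     limiter.wait()
#     bot.sendMessage(chat_id=chat_id, text=message, parse_mode=ParseMode.HTML)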

Using data from API in subsequent API calls

I should preface this with: I am not a programmer, and most of this code was not written by me. I unfortunately have a need and am trying to hack my way through it.
What I am trying to do is chain a few API calls together to ultimately get a list of IPs. The script queries the API and pulls (and prints) a list of device IDs. The device IDs look like this:
akdjlfijoaidjfod
g9jkidfjlskdjf44
3jdhfj4hf9dfiiu4
The device IDs then need to be passed as a parameter in the next API call like this:
https://api.example.com/devices/entities/devices/v1?ids=akdjlfijoaidjfod&ids=g9jkidfjlskdjf44&ids=3jdhfj4hf9dfiiu4 and so on.
I don't know where to begin. Instead of printing the asset IDs, I assume they should be stored in a variable and then appended to the URL. I tried doing that with "ID_LIST" but that didn't seem to work. Can you guys point me in the right direction?
import requests
import json

# Define API REST paths
BASE_URL = "https://api.example.com/"
OAUTH_URL_PART = "oauth2/token"
DEVICE_SEARCH = "devices/queries/devices/v1"
DEVICE_DETAILS = "devices/entities/devices/v1"

# Empty auth token to hold the value for subsequent requests
auth_Token = ""

# Section 1 - Authenticate to Example OAUTH
# Build a dictionary to hold the headers
headers = {
    'Content-type': 'application/x-www-form-urlencoded',
    'accept': 'application/json'
}

# Build a dictionary to hold the authentication data to be posted to get a token
auth_creds = {}
auth_creds['client_id'] = "<client_id>"
auth_creds['client_secret'] = "<client_secret>"
auth_creds['grant_type'] = "client_credentials"

# Call the API to get an authentication token - NOTE the authentication creds
print("Requesting token from " + BASE_URL + OAUTH_URL_PART)
auth_response = requests.post(BASE_URL + OAUTH_URL_PART, data=auth_creds, headers=headers)

# Check if successful
if auth_response.status_code != 201:
    # Output debug information
    print("\n Return Code: " + str(auth_response.status_code) + " " + auth_response.reason)
    print("Path: " + auth_response.request.path_url)
    print("Headers: ")
    print(auth_response.request.headers)
    print("Body: " + auth_response.request.body)
    print("\n")
    print("Trace_ID: " + auth_response.json()['meta']['trace_id'])
else:
    # Section 2 - Capture OAUTH token and store in headers for later use
    print("Token Created")
    # Capture the auth token for reuse in subsequent calls, by pulling it from the response
    # Note this token can be reused multiple times until it expires after 30 mins
    auth_Token = auth_response.json()['access_token']
    headers = {
        'authorization': 'bearer ' + auth_Token,
        'accept': 'application/json'
    }

    # Section 3 - Reuse authentication token to call other Example OAUTH2 APIs
    # Build the parameter dictionary
    call_params = {}
    call_params['offset'] = ""     # Non-mandatory param
    call_params['limit'] = "5000"  # The number of results
    call_params['sort'] = ""       #
    call_params['filter'] = ""     # To exclude devices

    # Call the devices API
    print("Searching Asset ID by getting from " + BASE_URL + DEVICE_SEARCH)
    DEVICE_search_response = requests.get(BASE_URL + DEVICE_SEARCH, params=call_params, headers=headers)
    # DEVICE_DETAILS_response = request.get(BASE_URL + DEVICE_DETAILS, headers=headers)

    # Check for errors
    if DEVICE_search_response.status_code != 200:
        # Output debug information
        print("\n Return Code: " + str(DEVICE_search_response.status_code) + " " + DEVICE_search_response.reason)
        print("Path: " + DEVICE_search_response.request.path_url)
        print("Headers: ")
        print(DEVICE_search_response.request.headers)
        print("Body: " + DEVICE_search_response.request.body)
        print("\n")
        print("Trace_ID: " + DEVICE_search_response.json()['meta']['trace_id'])
    else:
        # Iterate the results and print
        result = DEVICE_search_response.json()
        print("DEVICE found on " + str(len(result['resources'])) + " the following device id:")
        for devices in result['resources']:
            print(devices)

        ########### Part that is not working ###########
        DEVICE_DETAILS_response = requests.get(BASE_URL + DEVICE_DETAILS, headers=headers)
        # ID_LIST = str(len(result['resources']).replace(",", "&ids=")
        if DEVICE_DETAILS_response.status_code != 200:
            # Output debug information
            print("\n Return Code: " + str(DEVICE_DETAILS_response.status_code) + " " + DEVICE_DETAILS_response.reason)
            print("Path: " + DEVICE_DETAILS_response.request.path_url)
            print("Headers: ")
            print(DEVICE_DETAILS_response.request.headers)
            print("Body: " + DEVICE_DETAILS_response.request.body)
            print("\n")
            print("Trace_ID: " + DEVICE_DETAILS_response.json()['meta']['trace_id'])
        else:
            result = DEVICE_DETAILS_response.json()
            print("Device Details Found")
            for details in result['resources']:
                print(details)
Hi, to convert the strings in result['resources']:
['akdjlfijoaidjfod',
 'g9jkidfjlskdjf44',
 '3jdhfj4hf9dfiiu4']
to: https://api.example.com/devices/entities/devices/v1?ids=akdjlfijoaidjfod&ids=g9jkidfjlskdjf44&ids=3jdhfj4hf9dfiiu4
try this function:
def get_modified_url(mylist, myurl):
    url = myurl + '?'
    for idx, b in enumerate(mylist):  # enumerate the list to get index and element
        if idx > 0:
            url += '&ids=' + b   # append &ids= if not the first device id
        else:
            url += 'ids=' + b    # append ids= for the first device id
    return url

print(get_modified_url(result['resources'], BASE_URL + DEVICE_DETAILS))
full code would be:
def get_modified_url(mylist, myurl):
    url = myurl + '?'
    for idx, b in enumerate(mylist):  # enumerate the list to get index and element
        if idx > 0:
            url += '&ids=' + b   # append &ids= if not the first device id
        else:
            url += 'ids=' + b    # append ids= for the first device id
    return url

device_list = []

DEVICE_search_response = requests.get(BASE_URL + DEVICE_SEARCH, params=call_params, headers=headers)
# Check for errors
if DEVICE_search_response.status_code != 200:
    # Output debug information
    print("\n Return Code: " + str(DEVICE_search_response.status_code) + " " + DEVICE_search_response.reason)
    print("Path: " + DEVICE_search_response.request.path_url)
    print("Headers: ")
    print(DEVICE_search_response.request.headers)
    print("Body: " + DEVICE_search_response.request.body)
    print("\n")
    print("Trace_ID: " + DEVICE_search_response.json()['meta']['trace_id'])
else:
    # Iterate the results and print
    result = DEVICE_search_response.json()
    print("DEVICE found on " + str(len(result['resources'])) + " the following device id:")
    for devices in result['resources']:
        print(devices)
        device_list.append(devices)

    new_url = get_modified_url(device_list, BASE_URL + DEVICE_DETAILS)
    DEVICE_DETAILS_response = requests.get(new_url, headers=headers)
    if DEVICE_DETAILS_response.status_code != 200:
        # Output debug information
        print("\n Return Code: " + str(DEVICE_DETAILS_response.status_code) + " " + DEVICE_DETAILS_response.reason)
        print("Path: " + DEVICE_DETAILS_response.request.path_url)
        print("Headers: ")
        print(DEVICE_DETAILS_response.request.headers)
        print("Body: " + DEVICE_DETAILS_response.request.body)
        print("\n")
        print("Trace_ID: " + DEVICE_DETAILS_response.json()['meta']['trace_id'])
    else:
        result = DEVICE_DETAILS_response.json()
        print("Device Details Found")
        for details in result['resources']:
            print(details)
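Two asides on building the URL: str.join gives the same result in one line, and requests can encode the repeated ids= parameter itself if you pass a list as the value, so the manual assembly is optional. A sketch using the variables above:

# same URL as get_modified_url(device_list, BASE_URL + DEVICE_DETAILS):
new_url = BASE_URL + DEVICE_DETAILS + '?ids=' + '&ids='.join(device_list)

# or let requests produce ?ids=a&ids=b&ids=c from a list value:
DEVICE_DETAILS_response = requests.get(BASE_URL + DEVICE_DETAILS,
                                       params={'ids': device_list},
                                       headers=headers)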

python multithreading in loops

I think I have a very basic understanding of how threading works, but since I don't know that much, I can't figure this out. I want a pool limit of about 10 threads, but the tricky part is that I don't know how to make it work through the proxy file line by line.
proxies = {
    'http': 'http://123.10.210.213:9999',
    'https': 'http://123.10.210.213:9999'
}

def create_proxy_lst(txt):
    print("""
    ########################################
    #    WORKING    |    NOT WORKING       #
    ########################################
    """)
    proxy_list = []
    with open(txt) as f:
        for line in f:
            proxy_list.append(line.strip('\n'))
    return proxy_list

def check_proxy(website="https://google.com/"):
    working = 0
    not_working = 0
    total = 0
    lst = create_proxy_lst("uncheckedproxys.txt")
    for proxy in lst:
        try:
            proxies["https"] = "http://" + proxy
            proxies["http"] = "http://" + proxy
            r = requests.get(website, timeout=1, proxies=proxies)
            if r.status_code == 200:
                print("%s" % proxy)
                working += 1
                total += 1
                os.system("title Working: " + str(working) + "\t Not working " + str(not_working) + " ✔" + " Total: " + str(total) + "/" + str(len(lst)))
        except Exception:
            print("\t\t %s" % proxy)
            not_working += 1
            total += 1
            os.system("title Working: " + str(working) + "\t Not working " + str(not_working) + " ✖" + " Total: " + str(total) + "/" + str(len(lst)))
Put your proxies into a Queue.Queue() and then start 10 threads that each read proxies from the queue.
In your case:
from Queue import Queue  # Python 2; on Python 3 use: from queue import Queue
from threading import Thread
import os
import requests

def worker(proxy_queue, website="https://google.com/"):
    # per-thread counters (note: each of the 10 threads keeps its own tally)
    working = 0
    not_working = 0
    total = 0
    while not proxy_queue.empty():
        proxy = proxy_queue.get()
        try:
            proxies = {"https": "http://" + proxy,
                       "http": "http://" + proxy}
            r = requests.get(website, timeout=1, proxies=proxies)
            if r.status_code == 200:
                print("%s" % proxy)
                working += 1
                total += 1
                os.system("title Working: " + str(working) + "\t Not working " + str(not_working) + " ✔" + " Total: " + str(total))
        except Exception:
            print("\t\t %s" % proxy)
            not_working += 1
            total += 1
            os.system("title Working: " + str(working) + "\t Not working " + str(not_working) + " ✖" + " Total: " + str(total))

if __name__ == '__main__':
    # Build a queue
    proxy_queue = Queue()
    # Put the proxies into the queue
    with open("uncheckedproxys.txt") as f:
        for line in f:
            proxy_queue.put(line.strip())
    # Create the thread pool (args must be a tuple)
    thread_pool = [Thread(target=worker, args=(proxy_queue,)) for i in range(10)]
    # Start the threads
    for thread in thread_pool:
        thread.start()
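On Python 3, the same ten-worker pattern is available with less plumbing from concurrent.futures. A minimal sketch, assuming the same uncheckedproxys.txt file (the check helper is illustrative):

from concurrent.futures import ThreadPoolExecutor
import requests

def check(proxy, website="https://google.com/"):
    proxies = {"http": "http://" + proxy, "https": "http://" + proxy}
    try:
        r = requests.get(website, timeout=1, proxies=proxies)
        return proxy, r.status_code == 200
    except requests.RequestException:
        return proxy, False

with open("uncheckedproxys.txt") as f:
    proxy_list = [line.strip() for line in f if line.strip()]

# map() keeps at most 10 requests in flight and yields results in order
with ThreadPoolExecutor(max_workers=10) as pool:
    for proxy, ok in pool.map(check, proxy_list):
        print(("WORKING      " if ok else "not working  ") + proxy)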

How to end a server socket listen

So what I'm trying to do is make a multi-threaded server that creates a thread for each client that connects to it and replies back the string sent by the client.
It sort of works, but my server doesn't actually end properly. My KeyboardInterrupt catch doesn't seem to work in the Windows command prompt; the only thing that lets me exit the process is Ctrl + Pause/Break. Can anyone help me think of a way to get the server to end gracefully?
Server Code:
import socket
import threading
import time
import datetime
import sys

def getTime():
    ts = time.time()
    timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d_%H:%M:%S')
    return timeStamp

def ThreadFunction(clientsocket, clientaddr):
    global ReceivedData
    global SentData
    while True:
        # Receive data from the client
        data = clientsocket.recv(bufferSize)
        # Get client IP and port
        clientIP, clientSocket = clientsocket.getpeername()
        # Add to the total amount of data transferred
        ReceiveDataSize = len(data)
        ReceivedData += ReceiveDataSize
        # Log the received data
        text_file.write(str(getTime()) + "__ Size of data received (" + clientIP + ":" + str(clientSocket) + ") = " + str(ReceiveDataSize) + '\n')
        # Send the data back
        clientsocket.send(data)
        SentDataSize = len(data)
        SentData += SentDataSize
        # Log the sent data
        text_file.write(str(getTime()) + "__ Size of data sent (" + clientIP + ":" + str(clientSocket) + ") = " + str(SentDataSize) + '\n')

def Close(counter, ReceivedData, SentData):
    print ("Shutting down Server...")
    serversocket.close()
    text_file.write("\n\nTotal # of connections: " + str(counter))
    text_file.write("\nTotal data received: " + str(ReceivedData))
    text_file.write("\nTotal data sent: " + str(SentData))
    text_file.close()
    sys.exit()

if __name__ == '__main__':
    serverIP = raw_input('Enter your server IP \n')
    port = int(raw_input('What port would you like to use?\n'))
    # Maintain how many connections
    connections = []
    counter = 0
    # Maintain the amount of data sent to and from the server
    ReceivedData = 0
    SentData = 0
    bufferSize = 1024
    # Create and initialize the log file
    text_file = open("MultiThreadedServerLog.txt", "w")
    address = (serverIP, port)
    serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Bind the server to the port
    serversocket.bind(address)
    # The listen backlog queue size
    serversocket.listen(50)
    print ("Server is listening for connections\n")
    try:
        while 1:
            # Accept client connections, increment the number of connections
            clientsocket, clientaddr = serversocket.accept()
            counter += 1
            # Log client information
            print (str(clientaddr) + " : " + " Just Connected. \n Currently connected clients: " + str(counter) + "\n")
            text_file.write(str(getTime()) + " - " + str(clientaddr) + " : " + " Just Connected. \n Currently connected clients: " + str(counter) + "\n")
            clientThread = threading.Thread(target=ThreadFunction, args=(clientsocket, clientaddr))
            clientThread.start()
    except KeyboardInterrupt:
        print ("Keyboard interrupt occurred.")
        Close(counter, ReceivedData, SentData)
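On Windows, the blocking accept() call often prevents the KeyboardInterrupt from being processed at all, which matches the symptom above. A common workaround is to give the listening socket a timeout so the loop wakes up periodically and the signal can be handled between iterations. A sketch of the changed accept loop, using the variables defined above:

serversocket.settimeout(1.0)  # wake up once per second instead of blocking forever
try:
    while 1:
        try:
            clientsocket, clientaddr = serversocket.accept()
        except socket.timeout:
            continue  # no client this second; Ctrl+C can be delivered here
        # (if recv later times out in the handler, clientsocket.settimeout(None)
        # restores blocking mode on the accepted socket)
        counter += 1
        clientThread = threading.Thread(target=ThreadFunction, args=(clientsocket, clientaddr))
        clientThread.start()
except KeyboardInterrupt:
    Close(counter, ReceivedData, SentData)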
Client Code:
from socket import *
import threading
import time
import random
import sys
import datetime
serverIP = ""
port = 8005
message = ""
msgMultiple = 1
def getTime():
    ts = time.time()
    timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d_%H:%M:%S')
    return timeStamp

def run(clientNumber):
    buffer = 1024
    global totalTime
    s = socket(AF_INET, SOCK_STREAM)
    s.connect((serverIP, port))
    threadRTT = 0
    while 1:
        for _ in range(msgMultiple):
            cData = message + " From: Client " + str(clientNumber)
            # Start the timer and send the data
            start = time.time()
            s.send(cData.encode('utf-8'))
            print "Sent: " + cData
            # Stop the timer when the data is received
            sData = s.recv(buffer)
            end = time.time()
            # Keep track of the RTT and update the total time
            response_time = end - start
            threadRTT += end - start
            totalTime += response_time
            print "Received: " + cData + '\n'
            t = random.randint(0, 9)
            time.sleep(t)
        # Log information for this client
        text_file.write(
            "\nClient " + str(clientNumber) + " RTT time taken for " + str(msgMultiple) + " messages was: " + str(
                threadRTT) + " seconds.")
        threadRTT = 0
        break

if __name__ == '__main__':
    serverIP = raw_input('Enter the server IP: ')
    port = int(input('Enter the port: '))
    clients = int(input('Enter number of clients: '))
    message = raw_input('Enter a message to send: ')
    msgMultiple = int(input('Enter the number of times you would like to send the message: '))
    # Initialize the log file
    text_file = open("ClientLog.txt", "w")
    # Used to maintain a list of all running threads
    threads = []
    totalTime = 0
    # Create a separate thread for each client
    for x in range(clients):
        thread = threading.Thread(target=run, args=[x])
        thread.start()
        threads.append(thread)
    for thread in threads:
        thread.join()
    # Calculations for the log data
    bytes = sys.getsizeof(message)
    totalRequests = clients * msgMultiple
    totalBytes = totalRequests * bytes
    averageRTT = totalTime / totalRequests
    # Output the data
    print("Bytes sent in message was : " + str(bytes))
    print("Total Data sent was : " + str(totalBytes) + " Bytes.")
    print("Average RTT was : " + str(averageRTT) + " seconds.")
    print("Requests was : " + str(totalRequests))
    # Write the data to the log file
    text_file.write("\n\n Bytes sent in message was : " + str(bytes))
    text_file.write("\nTotal Data sent was : " + str(totalBytes) + " Bytes.")
    text_file.write("\nAverage RTT was : " + str(averageRTT) + " seconds.")
    text_file.write("\nRequests was : " + str(totalRequests))
Also, if anyone has any general improvements they would add to this code, let me know. I'm still pretty new at Python and still rough at it.
Here is the normal, intended output I'm getting from my server.
But when it gets to the last client that connects, it starts to drag on for some reason.
In the last picture, the entries go on for the majority of the text file, for a very long time. It seems like something isn't ending properly.
Solved by adding an if statement that checks for an empty receive; if the received data is empty, the socket is closed.
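For reference, Python's recv() never returns a negative count; a closed peer shows up as an empty bytes object instead. The check inside ThreadFunction might look like this (a sketch):

data = clientsocket.recv(bufferSize)
if not data:             # empty result means the client closed the connection
    clientsocket.close()
    break                # leave the while loop and let this thread end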

Python: String indices must be integers

I'm working on some Python code to automate GitHub merge requests.
I found the code below. When I run it, I get TypeError: string indices must be integers.
I've found several threads on here referencing this error, but I'm not quite sure how to implement the fixes in this code.
#!/usr/bin/env python
import json
import requests
import datetime

OAUTH_KEY = "xxxxxxxxxxxx"
repos = ['my_app']  # Add all repos you want automerged here
ignore_branches = ['master', 'release', 'staging', 'development']  # Add 'master' here if you don't want to automerge into master

# Print merge/no-merge message to logfile
def print_message(merging):
    if merging == True:
        message = "Merging: "
    else:
        message = "Not merging: "
    print message + str(pr_id) + " - " + user + " wants to merge " + head_ref + " into " + base_ref

# Merge the actual pull request
def merge_pr():
    r = requests.put("https://api.github.com/repos/:owner/%s/pulls/%d/merge" % (repo, pr_id,),
                     data=json.dumps({"commit_message": "Auto_Merge"}),
                     auth=('token', OAUTH_KEY))
    if "merged" in r.json() and r.json()["merged"] == True:
        print "Merged: " + r.json()['sha']
    else:
        print "Failed: " + r.json()['message']

# Main
print datetime.datetime.now()
for repo in repos:
    r = requests.get('https://api.github.com/repos/:owner/%s/pulls' % repo, auth=('token', OAUTH_KEY))
    data = r.json()
    for i in data:
        head_ref = i["head"]["ref"]
        base_ref = i["base"]["ref"]
        user = i["user"]["login"]
        pr_id = i["number"]
        if base_ref in ignore_branches:
            print_message(False)
        else:
            print_message(True)
            merge_pr()
Which line of code is showing the problem?
If it is this one:

else:
    message = "Not merging: "
print message + str(pr_id) + " - " + user + " wants to merge " + head_ref + " into " + base_ref

then try putting this code right below

if merging == True:
    message = "Merging: "

:

elif merging == False:
    message = "Not merging: "
print message + str(pr_id) + " - " + user + " wants to merge " + head_ref + " into " + base_ref
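As for the TypeError itself: if the GitHub request fails (for example, a bad token or an unreplaced :owner placeholder), r.json() is an error dict rather than a list of pull requests, so "for i in data" iterates over the dict's string keys and i["head"] indexes a string. A guard sketch, in the same Python 2 style as the code above:

data = r.json()
if not isinstance(data, list):
    # GitHub returned an error object such as {"message": "Bad credentials"}
    print "API error: " + str(data.get("message", data))
else:
    for i in data:
        head_ref = i["head"]["ref"]
        # ... rest of the loop as above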
