I have a Python script that logs me in to a service. I run:
./login.py user@email.com 'pass'
in order to log in.
When I enter this command directly, I log in successfully, but when I run the following script the server returns 400.
PYAPIROOT="scriptpath/script"
PYLOGIN="./login.py"
LOGIN="user@email.com"
PASS="'pass'"
function login {
    echo -----------------------------
    echo
    cd $PYAPIROOT
    echo "Logging in "$LOGIN
    python "$PYLOGIN" "$LOGIN" "$PASS"
    echo $PYLOGIN $LOGIN $PASS
    echo -----------------------------
}
login
When I copy and run what is echoed, I get 200.
Why can't I log in using my script?
I suspect it's the nested quoting:
PASS="'pass'"
The shell only strips the outer double quotes, so the inner single quotes become literal characters of the password your script sends. When you type the command yourself, the shell removes the single quotes around 'pass' before handing the argument to login.py. Use this instead:
PASS="pass"
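To see what the nested quotes actually do, here is a minimal demo (the `show_args` helper is only for illustration, not part of the original script):

```shell
#!/bin/sh
# Print each argument wrapped in <> so stray quote characters are visible.
show_args() {
    for arg in "$@"; do
        printf '<%s>\n' "$arg"
    done
}

PASS_BAD="'pass'"   # the inner single quotes are literal characters
PASS_GOOD="pass"

show_args "$PASS_BAD"    # prints <'pass'> -- the server sees the quotes too
show_args "$PASS_GOOD"   # prints <pass>
```

The literal quote characters in the first case are what turns a valid password into an invalid one and triggers the 400.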
Related
I'm trying to run this PowerShell command via Python:
sid = utils.execute_powershell(settings.D01_DC1_PORT,
                               settings.D01_USER,
                               settings.PASSWORD,
                               '(Get-ADForest).Domains | '
                               '%{Get-ADDomain -Server $_} | '
                               'select domainsid')
The port, the user and the password are all valid. If I run the same script in PowerShell I see values.
Yet, via Python I get this error:
'Unable to contact the server. This may be because this server does not exist, it is currently down, or it does not have the Active Directory Web Services running.'
What is wrong here?
If I change the script so that only one row is returned by the query, the code passes:
sid = utils.execute_powershell(settings.D01_DC1_PORT,
                               settings.D01_USER,
                               settings.PASSWORD,
                               '(Get-ADForest).Domains | '
                               '%{Get-ADDomain -Server $_} | '
                               'select domainsid -First 1')
I have created a Telegram bot using BotFather, https://t.me/botfather
On a Raspberry Pi server, I have some Python code answering messages written to the bot.
It uses "pyTelegramBotAPI" and is based on this code:
https://www.flopy.es/crea-un-bot-de-telegram-para-tu-raspberry-ordenale-cosas-y-habla-con-ella-a-distancia/
Basically it does "bot.polling()".
It works perfectly when I write messages to the bot using the Telegram smartphone app.
The problem comes when I write messages to the bot from another computer, using "bash" + "curl" + "POST".
The server does not receive the bash message, so it does not answer it.
Can someone shed some light on any concept I am missing?
P.S. The bash+curl code is this one:
#!/bin/bash
TOKEN="1436067683:ABGcHbGWS3ek1UdKvyRWC7Xtuv1DuyvT6A4"
ID="304688070"
MENSAJE="La Raspberry te saluda."
URL="https://api.telegram.org/bot${TOKEN}/sendMessage"
curl -s -X POST ${URL} -d chat_id=${ID} -d text="${MENSAJE}"
P.S. #2: Now I use JSON and have reached an interesting situation:
curl -v -X POST -H 'Content-Type: application/json' -d '{"chat_id":"${ID}", "text":"${MENSAJE}"}' ${URL_sndmsg}
... produces ...
{"ok":false,"error_code":400,"description":"Bad Request: chat not found"}
... but I did not change the ID or the TOKEN, and the old code still finds the chat ...
Strange.
The sendMessage endpoint of the Bot API is a POST endpoint, and according to the documentation you can send it JSON data.
The request you made in the question would look like this:
#!/bin/bash
TOKEN="1436067683:ABGcHbGWS3ek1UdKvyRWC7Xtuv1DuyvT6A4"
ID="304688070"
MENSAJE="La Raspberry te saluda."
URL="https://api.telegram.org/bot${TOKEN}/sendMessage"
curl -s -X POST -H 'Content-Type: application/json' -d "{\"chat_id\":\"${ID}\", \"text\":\"${MENSAJE}\"}" "${URL}"
Note that the JSON body must be in double quotes: inside single quotes the shell does not expand ${ID} and ${MENSAJE}, so Telegram receives the literal text ${ID} as the chat id, which is exactly why your P.S. #2 attempt got "Bad Request: chat not found".
I'd also recommend that you don't use the -s option while testing; that way you can see the output, and you could have figured this out yourself.
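The quoting difference is easiest to see by printing the two payloads side by side (a standalone demo using the same values as the question):

```shell
#!/bin/sh
ID="304688070"
MENSAJE="La Raspberry te saluda."

# Single quotes: the shell does NOT expand ${ID}, so the API receives
# the literal text ${ID} as the chat id -> "Bad Request: chat not found".
BODY_SINGLE='{"chat_id":"${ID}", "text":"${MENSAJE}"}'

# Double quotes (inner quotes escaped): ${ID} and ${MENSAJE} expand.
BODY_DOUBLE="{\"chat_id\":\"${ID}\", \"text\":\"${MENSAJE}\"}"

echo "$BODY_SINGLE"
echo "$BODY_DOUBLE"
# curl -s -X POST -H 'Content-Type: application/json' -d "$BODY_DOUBLE" "$URL"
```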
I have to log in to a bastion Linux host, then run kinit and beeline using pbrun, then sftp a csv file to Windows.
Query sample:
"SELECT * FROM db.table WHERE id > 100"
Is there a Python script or tool to automate this?
You could put your query in a file, for example hive_script.sql, and run it from the terminal:
hive -f hive_script.sql
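For example, the sample query from the question can be written to the file with a here-document and then executed (a sketch; the table name is the placeholder from the question, and the hive invocation is guarded so the snippet is harmless on machines without the hive CLI):

```shell
#!/bin/sh
# Write the query to a file...
cat > hive_script.sql <<'EOF'
SELECT * FROM db.table WHERE id > 100;
EOF

# ...and hand the file to hive with -f, if the CLI is available.
if command -v hive >/dev/null 2>&1; then
    hive -f hive_script.sql
fi
```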
I want to post my findings.
The most difficult part was figuring out expect + pbrun: because there are two interactive prompts, I had to pause for a second after the first one.
My expect code:
#!/usr/bin/expect -f
set timeout 300
set usr [lindex $argv 0];
set pwd [lindex $argv 1];
set query_file [lindex $argv 2];
spawn -noecho pbrun $usr &
expect -re "Password:"
send "$pwd\r"
sleep 1
expect "Enter reason for this privilege access:"
send "test\r"
send "kinit -k -t /opt/Cloudera/keytabs/`whoami`.`hostname -s`.keytab `whoami`/`hostname -f`@YOUR_FQDN_NAME.NET\r"
send "beeline -u 'jdbc:hive2://bigdataplatform-your_dev.net:10000/;principal=hive/bigdataplatform-your_dev.net@YOUR_FQDN_NAME.NET;ssl=true' --outputformat=csv2 --verbose=false --fastConnect=true --silent=true -f $query_file;\r"
expect "*]$\ " {send "exit\r"}
expect eof
Query:
select * from gfocnnsg_work.pytest LIMIT 1000000;
The rest is Python and paramiko: I create a Transport object, execute the expect script, and save standard output on the Windows side.
Data access path:
Windows desktop ->
SSH ->
Linux login ->
pbrun service login ->
kinit ->
beeline ->
SQL ->
save output on Windows
Here's a Python script with details: hivehoney
Here I'm providing complete details along with the script:
I'm running my script on HUB1.
First, it asks the user for credentials: username, password, filename, path.
Sample snippet:
try:
    usrname = raw_input('Enter your credentials:')
    regex = r'[x][A-Za-z]{6,6}'
    sigMatch = re.match(regex, usrname)  # username & pattern matching
    sigMatch.group()
except AttributeError:
    print "Kindly enter your name"
    raise AttributeError
[...]
Immediately I'm doing SSH from HUB1 as below:
ssh.load_system_host_keys()
ssh.connect('100.103.113.15', username='', password='')
ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command("cd /home/dir;ls -lrt")
By doing SSH from HUB1, I can virtually log in to HUB2 and copy the file into the desired directory (i.e., though I run my script on HUB1, via SSH I can pass control to HUB2, execute commands on HUB2, and perform the desired operations).
Now the challenge: I need to copy that same file (the one copied from HUB1) from HUB2 to HUB3, by running the same script on HUB1 itself. So I did scp over SSH as below:
ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command("scp %s usrname@112.186.104.7:/home/dir/ABCdir" % (imageName))
Here it should prompt for the password (e.g. "Please enter your password"), but the command above directs that prompt to the ssh_stderr channel. Because of this, I'm not able to successfully copy the file when I execute scp in ssh.exec_command("").
Points to remember:
- I don't want to run two scripts, one on HUB1 and another on HUB2 (for scp) -> ruled out.
- My intention is to copy a file from HUB1 -> HUB2 -> HUB3, which are independent of each other.
In addition to the procedure above, I tried a couple of methods:
- Adding an unknown host key (by doing this the in, out & err channels are open, but the file transfer doesn't occur).
- Invoking another script on HUB2 (I'm able to invoke scripts that do not require user input, but my purpose is to pass user input, so I was not able to trigger another script here).
I'd appreciate any possible ways around the above problem.
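Not from the original post, but one common way around the stderr password prompt is to make the HUB2 -> HUB3 copy non-interactive, e.g. with sshpass, so no prompt is ever written. A sketch, assuming sshpass is available on HUB2; the file name and password below are hypothetical placeholders:

```shell
#!/bin/sh
# Hypothetical values, for illustration only.
imageName="ABC.img"
HUB3_PASS="hub3-password"

# The command string the Python code would pass to ssh.exec_command():
# sshpass feeds the password to scp on its behalf, so no interactive
# prompt ever reaches the ssh_stderr channel.
CMD="sshpass -p '${HUB3_PASS}' scp ${imageName} usrname@112.186.104.7:/home/dir/ABCdir"
echo "$CMD"
```

The trade-off is that the HUB3 password passes through a command line on HUB2; key-based authentication between HUB2 and HUB3 would avoid that entirely.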
I've got some code that I'm using to extract email addresses from my Gmail contacts into a text file. It's a simple Python script that runs in a cron job and is based on the Python gdata library (currently v2.0.18).
As of earlier this month, this no longer works, due to Google deprecating the ClientLogin protocol. The resulting error looks like this:
{'status': 401, 'body': '<?xml version="1.0" encoding="UTF-8"?>\n<errors xmlns="http://schemas.google.com/g/2005">\n <error>\n <domain>GData</domain>\n <code>required</code>\n <location type="header">Authorization</location>\n <internalReason>Login Required</internalReason>\n </error>\n</errors>\n', 'reason': 'Unauthorized'}
I knew this was coming and dealt with it in other places (like AppEngine applications), but forgot that I would have to convert this script. Now that I'm in here, I find that I have no idea how I'm supposed to make this work.
All of the references I've found, such as here on the Google Apps Developer Blog or here and here on StackOverflow, suggest that the solution is to use an OAuth2Token. However, that requires a client id and client secret from the Google APIs console -- which is tied to an application. I don't have an application. I just want to authenticate from my script.
Can someone please suggest the proper way to do this in a standalone script? Or am I out of luck and there's no mechanism to accomplish this any more?
This is the guts of the existing contacts code:
from gdata.contacts.service import ContactsService, ContactsQuery

user = "myuser@gmail.com"
password = "mypassword"
addresses = set()
client = ContactsService(additional_headers={"GData-Version": "2"})
client.ssl = True
client.ClientLogin(user, password)
groups = client.GetGroupsFeed()
for group in groups.entry:
    if group.content.text == "System Group: My Contacts":
        query = ContactsQuery()
        query.max_results = 9999  # large enough that we'll get "everything"
        query.group = group.id.text
        contacts = client.GetContactsFeed(query.ToUri())
        for contact in contacts.entry:
            for email in contact.email:
                addresses.add(email.address.lower())
        break
return addresses
Ideally, I'm looking to replace client.ClientLogin() with some other mechanism that preserves the rest of code using gdata. Alternately, if this can't really be done with gdata, I'm open to converting to some other library that offers similar functionality.
It ended up being easier to just hack together a shell script using curl than
mess with the gdata library. As expected, I was able to do most of the
verification process manually, outside of the script, per the
OAuth2 Device Flow instructions.
After finishing the verification process, I had the 4 required credentials:
the client id, the client secret, the access token, and the refresh token.
Per Google's documentation, the access token eventually expires. You can
get a new access token by asking the token manager to refresh the token.
When you do this, you apparently get a new access token, but not a new refresh
token.
I store the client id and secret and the refresh token in the CREDENTIALS
file in JSON format. Since the access token changes over time, it is stored in the ACCESS file, also in JSON format.
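For reference, the two files can look like this (all values are placeholders, shown only so the field names line up with the jq paths the script reads):

```shell
#!/bin/sh
# Create example CREDENTIALS and ACCESS files with placeholder values,
# matching the fields read via jq (.client_id, .client_secret,
# .refresh_token and .access_token).
cat > CREDENTIALS.example <<'EOF'
{
  "client_id": "your-client-id.apps.googleusercontent.com",
  "client_secret": "your-client-secret",
  "refresh_token": "your-refresh-token"
}
EOF

cat > ACCESS.example <<'EOF'
{
  "access_token": "your-current-access-token"
}
EOF
```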
The important parts of the script are shown below:
#!/bin/ksh

CLIENT_ID=$(cat ${CREDENTIALS} | jq -r ".client_id")
CLIENT_SECRET=$(cat ${CREDENTIALS} | jq -r ".client_secret")
REFRESH_TOKEN=$(cat ${CREDENTIALS} | jq -r ".refresh_token")
ACCESS_TOKEN=$(cat ${ACCESS} | jq -r ".access_token")

CONTACTS_URL="https://www.google.com/m8/feeds/contacts/default/full?access_token=${ACCESS_TOKEN}&max-results=5000&v=3.0"
ERROR=$(curl --show-error --silent --fail "${CONTACTS_URL}" -o ${CONTACTS_XML} 2>&1)
RESULT=$?
if [[ ${RESULT} -eq 0 ]]
then
    cat ${CONTACTS_XML} | grep 'gd:email' | sed 's/^.*address="//g' | sed 's/".*$//g' | tr '[:upper:]' '[:lower:]' | sort | uniq
elif [[ ${RESULT} -eq 22 ]]
then
    echo "${ERROR}" | grep -q "401"
    if [[ $? -eq 0 ]]
    then
        TOKEN_URL="https://www.googleapis.com/oauth2/v3/token"
        REFRESH_PARAMS="client_id=${CLIENT_ID}&client_secret=${CLIENT_SECRET}&refresh_token=${REFRESH_TOKEN}&grant_type=refresh_token"
        ERROR=$(curl --show-error --silent --fail --data "${REFRESH_PARAMS}" ${TOKEN_URL} -o ${REFRESH_JSON})
        RESULT=$?
        if [[ ${RESULT} -eq 0 ]]
        then
            ACCESS_TOKEN=$(cat ${REFRESH_JSON} | jq -r ".access_token")
            jq -n --arg access_token "${ACCESS_TOKEN}" '{"access_token": $access_token}' > ${ACCESS}
            CONTACTS_URL="https://www.google.com/m8/feeds/contacts/default/full?access_token=${ACCESS_TOKEN}&max-results=5000&v=3.0"
            ERROR=$(curl --show-error --silent --fail "${CONTACTS_URL}" -o ${CONTACTS_XML} 2>&1)
            RESULT=$?
            if [[ ${RESULT} -eq 0 ]]
            then
                cat ${CONTACTS_XML} | grep 'gd:email' | sed 's/^.*address="//g' | sed 's/".*$//g' | tr '[:upper:]' '[:lower:]' | sort | uniq
            else
                print "Unexpected error: ${ERROR}" >&2
                exit 1
            fi
        else
            print "Unexpected error: ${ERROR}" >&2
            exit 1
        fi
    else
        print "Unexpected error: ${ERROR}" >&2
        exit 1
    fi
else
    print "Unexpected error: ${ERROR}" >&2
    exit 1
fi
It's not the prettiest thing in the world, but I was looking for something quick-and-dirty, and this works.
Can someone please suggest the proper way to do this in a standalone script? Or am I out of luck and there's no mechanism to accomplish this any more?
There's no mechanism any more like the one you are using. You will have to set up a Cloud Developer project and use OAuth2, and rewrite your script.
To make it as future-proof as possible, you could switch to the newest Contacts API. With that API you can use the OAuth2 device flow, which might be simpler for your use case.
Not the answer you were hoping to hear, I know, but I think it's the only answer.