How to use Python to create an account on a website?

I am developing a website that works with Asana, so when a new user registers on my website they should automatically be registered on Asana as well.
How can I use Python to register a new Asana account using the email and password provided on the sign-up page of my site?
https://asana.com/

I found a script someone posted for signing up on Facebook. Can it be adapted to sign up on Asana? "usr" and "pwd" should be the input from my website.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from time import sleep

usr = input('Enter Email Id: ')
pwd = input('Enter Password: ')

driver = webdriver.Chrome()
driver.get('https://www.facebook.com/')
print("Opened facebook...")
sleep(1)
# fill in the email field
email_field = driver.find_element_by_id('email')
email_field.send_keys(usr)
print("Email Id entered...")
sleep(1)
# fill in the password field
password_field = driver.find_element_by_id('pass')
password_field.send_keys(pwd)
print("Password entered...")
# submit the form
login_button = driver.find_element_by_id('loginbutton')
login_button.click()
print("Done...")
sleep(10)
driver.quit()
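The Facebook script above can be generalized: the flow (open the page, fill the fields, click submit) is the same for any site; only the element IDs differ. A hedged sketch of that idea follows. Note that the Asana URL and element IDs would have to be discovered by inspecting the real sign-up page; the ones in the usage example are placeholders, and sites may also require email verification or a captcha, so automated sign-up can still fail.

```python
def fill_signup_form(driver, url, field_values, submit_id):
    """Open url, type each value into the element with that ID, then click submit.

    field_values maps element id -> text to type; submit_id is the button's id.
    """
    driver.get(url)
    for element_id, value in field_values.items():
        driver.find_element_by_id(element_id).send_keys(value)
    driver.find_element_by_id(submit_id).click()
```

Hypothetical usage (URL and IDs are guesses, not Asana's real ones): `fill_signup_form(driver, 'https://asana.com/create-account', {'email': usr, 'password': pwd}, 'signup-button')`. If the target service offers a server-side API, that is usually the more robust route than browser automation.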

Related

How to click on a Gmail email which has a certain link, using Selenium and Python?

I realize that using the Gmail API is the best solution, but because the Gmail account has restrictions (it is a school account) I can't actually use the API. While searching for a solution I found out about Selenium.
I haven't found a tutorial on how to filter emails / click on emails received within the last 24 hours (which I think I can set up myself) and click on emails with a link attached (google.meet).
Since neither the subject nor the sender is always the same, I can't filter on those, so I need help with some kind of email body filter.
import webbrowser
from selenium import webdriver
import time
import email
import imaplib
import sys
import datetime
import smtplib
with open('accountdetail.txt', 'r') as file:
    for details in file:
        username, password = details.split(':')

# create a new Chrome session
driver = webdriver.Chrome(r'C:\driver\chromedriver.exe')
driver.implicitly_wait(30)
driver.maximize_window()
# navigate to the application home page
driver.get("https://accounts.google.com/")
#get the username textbox
login_field = driver.find_element_by_name("identifier")
login_field.clear()
#enter username
login_field.send_keys(username)
login_field.send_keys(u'\ue007') #unicode for enter key
time.sleep(4)
#get the password textbox
password_field = driver.find_element_by_name("password")
password_field.clear()
#enter password
password_field.send_keys(password)
password_field.send_keys(u'\ue007') #unicode for enter key
time.sleep(10)
#navigate to gmail
driver.get("https://mail.google.com/")
I have found these resources, but for some reason they only work with the subject and don't actually click on an email containing a link.
How to click on a Particular email from gmail inbox in Selenium?
https://www.youtube.com/watch?v=6VJaWtz6kzs
If you have the exact link, you can get the element using XPath and click it:
url = r'YOUR URL, FROM ANY VARIABLE'
driver.find_element_by_xpath('//a[@href="' + url + '"]').click()
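The two filters the question describes (received within 24 hours, body contains a Google Meet link) can be done in plain Python once a message's Date header and body have been fetched, e.g. with imaplib/email. A stdlib-only sketch:

```python
import re
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

# Matches the usual Google Meet invite URL shape.
MEET_RE = re.compile(r'https://meet\.google\.com/[a-z\-]+')

def is_recent(date_header, hours=24, now=None):
    """True if the RFC 2822 Date header is within the last `hours` hours."""
    received = parsedate_to_datetime(date_header)
    now = now or datetime.now(timezone.utc)
    return now - received <= timedelta(hours=hours)

def find_meet_link(body):
    """Return the first Google Meet link found in the body, or None."""
    match = MEET_RE.search(body)
    return match.group(0) if match else None
```

With these, the Selenium part shrinks to opening whichever link `find_meet_link` returns, instead of trying to click around the Gmail UI.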

Selenium submit button element not interactable

I posted recently about some trouble I was having with Selenium, primarily the anticaptcha API. I've managed to solve that, but now I am having trouble here. This is my current code:
from time import sleep
from selenium import webdriver
from python_anticaptcha import AnticaptchaClient, NoCaptchaTaskProxylessTask
import os
import time
#Gather Api Key
api_key = 'INSERT API KEY HERE'
#Go to the acc registration site
browser = webdriver.Chrome()
browser.implicitly_wait(5)
browser.get('https://www.reddit.com/register/')
sleep(2)
#Input email
email_input = browser.find_element_by_css_selector("input[name='email']")
email_input.send_keys("INSERT EMAIL HERE")
#Continue to the next part of the registration process
continue_button = browser.find_element_by_xpath("//button[@type='submit']")
continue_button.click()
#Find and input the username and password fields
username_input = browser.find_element_by_css_selector("input[name='username']")
password_input = browser.find_element_by_css_selector("input[name='password']")
username_input.send_keys("INSERT USERNAME HERE")
password_input.send_keys("INSERT PASSWORD HERE")
#Gather site key
url = browser.current_url
site_key = "6LeTnxkTAAAAAN9QEuDZRpn90WwKk_R1TRW_g-JC"
#Run the anticaptcha solving process
client = AnticaptchaClient(api_key)
task = NoCaptchaTaskProxylessTask(url, site_key)
job = client.createTask(task)
print("Waiting for recaptcha solution")
job.join()
# Receive response
response = job.get_solution_response()
print(response)
print("Solution received")
# Inject response in webpage
browser.execute_script('document.getElementById("g-recaptcha-response").innerHTML = "%s"' % (response))
print("Injecting Solution")
# Wait a moment to execute the script (just in case).
time.sleep(1)
print("Solution injected")
# Press submit button
browser.implicitly_wait(10)
Signup = browser.find_element_by_xpath('//input[@type="submit"]')
Signup.click()
Everything runs smoothly except for the final line. I think the program recognizes the submit button, but for some reason it gives an "element not interactable" error. Any help on how to solve this would be greatly appreciated.
I had the same issue when I was using Selenium. Sometimes it happens that even though Selenium has located the element, it is not yet "ready" to be interacted with. Adding a delay before clicking the submit button should fix the issue.
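A fixed sleep works, but retrying the click until it succeeds is more robust (Selenium's own `WebDriverWait` with `element_to_be_clickable` is the idiomatic version of the same idea). A minimal dependency-free sketch of the retry pattern:

```python
import time

def click_with_retry(find_element, attempts=5, delay=1.0):
    """Call find_element() and click the result, retrying up to `attempts`
    times while the click raises (e.g. "element not interactable")."""
    last_exc = None
    for _ in range(attempts):
        try:
            find_element().click()
            return True
        except Exception as exc:  # in real code, catch the specific Selenium exception
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

Hypothetical usage with the code above: `click_with_retry(lambda: browser.find_element_by_xpath('//input[@type="submit"]'))`.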

Fetch current birthdays after logging into Facebook with Python

I have decided to attempt to create a simple web scraper script in Python. As a small challenge, I decided to write a script that logs me into Facebook and fetches the current birthdays displayed in the sidebar. I have managed to write a script that logs me into Facebook; however, I have no idea how to fetch the displayed birthdays.
This is my script.
from selenium import webdriver
from time import sleep
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.options import Options
usr = 'EMAIL'
pwd = 'PASSWORD'
driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get('https://www.facebook.com/')
print ("Opened facebook")
sleep(1)
username_box = driver.find_element_by_id('email')
username_box.send_keys(usr)
print ("Email Id entered")
sleep(1)
password_box = driver.find_element_by_id('pass')
password_box.send_keys(pwd)
print ("Password entered")
login_box = driver.find_element_by_id('u_0_b')
login_box.click()
print ("Login Successful")
print ("Fetched needed data")
input('Press anything to quit')
driver.quit()
print("Finished")
This is my first time creating a script of this type. My assumption is that I am supposed to traverse the children of the "jsc_c_3d" div element until I get to the displayed birthdays. However, the id of this element changes every time the page is refreshed. Can anyone tell me how this is done, or whether this is the right way to go about solving the problem?
The div for the birthdays, after inspecting elements:
<div class="" id="jsc_c_3d">
<div class="j83agx80 cbu4d94t ew0dbk1b irj2b8pg">
<div class="qzhwtbm6 knvmm38d"><span class="oi732d6d ik7dh3pa d2edcug0 qv66sw1b c1et5uql
a8c37x1j muag1w35 enqfppq2 jq4qci2q a3bd9o3v knj5qynh oo9gr5id hzawbc8m" dir="auto">
<strong>Bobi Mitrevski</strong>
and
<strong>Trajce Tusev</strong> have birthdays today.</span></div></div></div>
You are correct that you would need to traverse the inner elements of jsc_c_3d to extract the birthdays you want. However, this kind of automated web scraping is fragile when the id value is dynamic and changes on every page load. In that case an HTML parser such as bs4 (BeautifulSoup) can do the job:
with the bs4 approach you simply extract the relevant div tags from the DOM and then parse them to obtain the required contents.
More generally, this problem is solvable using the Facebook-API which could be as simple as
import facebook
token = 'a token' # token omitted here; it is the same token I use in https://developers.facebook.com/tools/explorer/
graph = facebook.GraphAPI(token)
args = {'fields' : 'birthday,name' }
friends = graph.get_object("me/friends",**args)

Python Selenium chromedriver: Instagram keeps asking for the code sent by SMS

Has anyone faced the problem that an automation routine for Instagram, written in Python with Selenium chromedriver, has recently become difficult to run because Instagram keeps asking for the code it sends by SMS or email?
When you log in from a normal browser it asks for the code only once, but when you do it with Selenium it asks every time.
Here is the code
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

options = webdriver.ChromeOptions()
#options.add_argument('--headless')
options.add_argument('--disable-logging')
options.add_argument('--log-level=3')
driver = webdriver.Chrome(chrome_options=options)
print('Driver started successfully!')
driver.get("https://instagram.com/")
time.sleep(6)
# detect the page language from the <html lang="..."> attribute
pg = driver.find_element_by_tag_name("html")
lng = pg.get_attribute("lang")
if lng == 'en':
    lin = "Log in"
    foll = "followers"
    foll_tx = "Follow"
    subscr_tx = "following"
get_enter_bt = driver.find_elements_by_link_text(lin)
lin_found = False
while not lin_found:
    if len(get_enter_bt) == 0:
        print('Login not found. Refreshing...')
        driver.refresh()
        time.sleep(6)
        get_enter_bt = driver.find_elements_by_link_text(lin)
    else:
        lin_found = True
        print('Login button found!')
time.sleep(3)
get_enter_bt[0].click()
time.sleep(3)
# login
login = driver.find_element_by_name("username")
login.send_keys(username)
login = driver.find_element_by_name("password")
login.send_keys(password)
login.send_keys(Keys.RETURN)
time.sleep(9)
# dismiss the "get the mobile app" prompt, if shown
get_close_mobapp = driver.find_elements_by_css_selector("button._dbnr9")
if len(get_close_mobapp) != 0:
    get_close_mobapp[0].click()
# dismiss the notifications prompt, if shown
notif_switch = driver.find_elements_by_css_selector("button.aOOlW.HoLwm")
print('notif butt %s' % len(notif_switch))
if len(notif_switch) > 0:
    notif_switch[0].click()
# detect suspicious login
susp_login_msg = driver.find_element_by_xpath("//*[@id=\"react-root\"]/section/div/div/div[1]/div/p")  # <p class="O4QwN">Подозрительная попытка входа</p> ("Suspicious login attempt")
print('susp login msg %s' % (susp_login_msg is not None))
if susp_login_msg:
    if susp_login_msg.text == 'Подозрительная попытка входа':  # "Suspicious login attempt"
        try:
            mobile_button = driver.find_element_by_xpath("//*[@id=\"react-root\"]/section/div/div/div[3]/form/div/div[2]/label")
            mobile_button.click()
        except:
            mobile_button = driver.find_element_by_xpath("//*[@id=\"react-root\"]/section/div/div/div[3]/form/div/div[1]/label")
            mobile_button.click()
        snd_code_btn = driver.find_element_by_xpath("//*[@id=\"react-root\"]/section/div/div/div[3]/form/span/button")
        snd_code_btn.click()
        print('Instagram detected an unusual login attempt')
        print('A security code was sent to your mobile ' + mobile_button.text)
        security_code = input('Type the security code here: ')
        security_code_field = driver.find_element_by_xpath("//input[@id='security_code']")
        security_code_field.send_keys(security_code)
This code works fine, but how do I stop Instagram from asking for the SMS code every time? Does it detect that I am running Selenium and trigger some kind of anti-bot check?
I run the script on a schedule to perform a series of likes for my subscribers, for example, which is time-consuming to do by hand; automation was my remedy.
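One likely contributor (an assumption, not something Instagram documents) is that each Selenium run starts with a fresh browser profile, so the "trusted device" cookies from the first SMS verification are thrown away every time. Pointing Chrome at a persistent user-data directory keeps them between runs; a sketch, where the profile path is a placeholder to adjust for your OS:

```python
# Reuse a persistent Chrome profile so session/"trusted device" cookies
# survive between script runs. The path below is an assumption.
PROFILE_ARGS = [
    "--user-data-dir=/home/me/.insta-profile",
    "--profile-directory=Default",
]

def build_options(webdriver_module, extra_args=()):
    """Build ChromeOptions with a persistent profile plus any extra flags."""
    options = webdriver_module.ChromeOptions()
    for arg in list(PROFILE_ARGS) + list(extra_args):
        options.add_argument(arg)
    return options
```

Hypothetical usage with the script above: `driver = webdriver.Chrome(chrome_options=build_options(webdriver, ['--disable-logging']))`. After one interactive login (and one SMS code), subsequent runs should reuse the saved session, though Instagram may still challenge logins it considers unusual.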

Python: using Splinter to open and log into a webpage, but need to save the complete webpage

I am using Splinter to take an email and password, then open Facebook in Firefox and log in, as can be seen in the code below.
This all works fine, but I'm looking for a way to save the webpage once logged in. From looking around, Splinter cannot do this; I also looked at Selenium, which didn't seem able to do it either. Is there any way of doing this?
from splinter import Browser
# takes the email address for the facebook account needed
user_email = raw_input("enter users email address ")
# takes the password for the user account needed
user_pass = raw_input("enter users password ")
# loads the firefox browser
browser = Browser('firefox')
#selects facebook as the website to load in the browser
browser.visit('http://www.facebook.com')
# fills the email field in the facebook login section
browser.fill('email', user_email)
browser.fill('pass', user_pass)
#selects the login button on the facebook page to log in with details given
button = browser.find_by_id('u_0_d')
button.click()
You can get the webpage content using browser.html.
from splinter import Browser
user_email = raw_input("enter users email address ")
user_pass = raw_input("enter users password ")
browser= Browser('firefox')
browser.visit('http://www.facebook.com')
browser.fill('email', user_email)
browser.fill('pass', user_pass)
#Here is what I made a slight change
button = browser.find_by_id('loginbutton')
button.click()
# I didn't find a page-saving function for facebook using Splinter,
# but as an alternative I found the screenshot feature.
browser.screenshot()
# This one works with other websites, but for some reason not with facebook.
import urllib2
page = urllib2.urlopen('http://stackoverflow.com')
page_content = page.read()
with open('page_content.html', 'w') as fid:
    fid.write(page_content)
# Hope this helps ;)
*Note: the saved file ends up in the Python working directory, temp, or Desktop.
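Since `browser.html` returns the logged-in page's source as a string, saving "the complete webpage" can be as simple as writing that string to disk. This only captures the HTML, not images or stylesheets; a minimal sketch:

```python
import codecs

def save_html(html, path):
    """Write the page source string to `path` as UTF-8 and return the path."""
    with codecs.open(path, 'w', encoding='utf-8') as f:
        f.write(html)
    return path
```

Hypothetical usage after the login above: `save_html(browser.html, 'facebook_logged_in.html')`.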
