I saw a tutorial on YouTube (I can't link it because I can't find it anymore).
The code is supposed to detect devices that are connected to my Internet/router, but I don't understand much of how the tutorial author's code works.
I also got this error in my console:
File "c:/Users/j/Desktop/Connection-Detection.py", line 6, in <module>
    IP_NETWORK = config('IP_NETWORK')
File "C:\Users\j\AppData\Local\Programs\Python\Python38-32\lib\site-packages\decouple.py", line 199, in __call__
    return self.config(*args, **kwargs)
File "C:\Users\j\AppData\Local\Programs\Python\Python38-32\lib\site-packages\decouple.py", line 83, in __call__
    return self.get(*args, **kwargs)
File "C:\Users\j\AppData\Local\Programs\Python\Python38-32\lib\site-packages\decouple.py", line 68, in get
    raise UndefinedValueError('{} not found. Declare it as envvar or define a default value.'.format(option))
decouple.UndefinedValueError: IP_NETWORK not found. Declare it as envvar or define a default value.
PS C:\Users\j\Desktop\python\login>
Here's "Detection.py":
import sys
import subprocess
import os
from decouple import config

IP_NETWORK = config('IP_NETWORK')
IP_DEVICE = config('IP_DEVICE')

proc = subprocess.Popen(['ping', IP_NETWORK], stdout=subprocess.PIPE)

while True:
    line = proc.stdout.readline()  # readline must be called, not just referenced
    if not line:
        break
    connected_ip = line.decode('utf-8').split()[3]
    if connected_ip == IP_DEVICE:
        # 'say' is the macOS text-to-speech command
        subprocess.Popen(['say', 'Someone connected to network'])
You need to define the environment variables in a .env file in the same directory as the Detection.py file.
Steps
Install python-decouple: pip install python-decouple.
Create a file called .env
Open the .env file and paste the following into it.
IP_NETWORK=YOUR_IP_NETWORK
IP_DEVICE=YOUR_IP_DEVICE
Replace YOUR_IP_NETWORK and YOUR_IP_DEVICE with your actual network IP and device IP.
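Alternatively, python-decouple lets you supply a default so the script doesn't crash when a variable is undefined. A minimal sketch (the addresses below are placeholders, not real values):
from decouple import config

# Fall back to defaults instead of raising UndefinedValueError
# (placeholder addresses; substitute your own network/device IPs).
IP_NETWORK = config('IP_NETWORK', default='192.168.1.0')
IP_DEVICE = config('IP_DEVICE', default='192.168.1.42')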
I'm having issues deploying my Panel app to be served up as HTML.
Following instructions at https://panel.holoviz.org/user_guide/Running_in_Webassembly.html, with script.py as
import panel as pn
from mymodule import MyObj
pn.extension('terminal', 'tabulator', sizing_mode="stretch_width")
gspec = pn.GridSpec(sizing_mode='stretch_width', max_height=100, ncols=3)
PI = MyObj()
gspec[0, 0:1] = PI.tabs
gspec[0, 2] = PI.view
pn.Column(
    "Text Text Text",
    gspec,
).servable()
and mymodule/__init__.py:
__version__ = '0.1.0'
from .my_obj import MyObj
and mymodule/my_obj.py:
from .param_classes import ClassA
from .param_classes import ClassB
# etc
from .compute_obj import ComputeObj
class MyObj(object):
    # all the details of the panel build, calling in turn
    # param classes detailed in another file, and also calling another module
    # to handle all the computation behind the panel
panel serve script.py --autoreload works perfectly, but
$ panel convert script.py --to pyodide-worker --out pyodide
$ python3 -m http.server
does not work. At http://localhost:8000/pyodide/script.html I get a big ModuleNotFoundError: no module named 'mymodule' display, an endlessly spinning loading graphic, and the following in the Developer Tools console output:
pyodide.asm.js:10 Uncaught (in promise) PythonError: Traceback (most recent call last):
File "/lib/python3.10/asyncio/futures.py", line 201, in result
raise self._exception
File "/lib/python3.10/asyncio/tasks.py", line 232, in __step
result = coro.send(None)
File "/lib/python3.10/site-packages/_pyodide/_base.py", line 506, in eval_code_async
await CodeRunner(
File "/lib/python3.10/site-packages/_pyodide/_base.py", line 359, in run_async
await coroutine
File "<exec>", line 11, in <module>
ModuleNotFoundError: No module named 'mymodule'
at new_error (pyodide.asm.js:10:218123)
at pyodide.asm.wasm:0xdef7c
at pyodide.asm.wasm:0xe37ae
at method_call_trampoline (pyodide.asm.js:10:218037)
at pyodide.asm.wasm:0x126317
at pyodide.asm.wasm:0x1f6f2e
at pyodide.asm.wasm:0x161a32
at pyodide.asm.wasm:0x126827
at pyodide.asm.wasm:0x126921
at pyodide.asm.wasm:0x1269c4
at pyodide.asm.wasm:0x1e0697
at pyodide.asm.wasm:0x1da6a5
at pyodide.asm.wasm:0x126a07
at pyodide.asm.wasm:0x1e248c
at pyodide.asm.wasm:0x1e00d9
at pyodide.asm.wasm:0x1da6a5
at pyodide.asm.wasm:0x126a07
at pyodide.asm.wasm:0xe347a
at Module.callPyObjectKwargs (pyodide.asm.js:10:119064)
at Module.callPyObject (pyodide.asm.js:10:119442)
at wrapper (pyodide.asm.js:10:183746)
I should add that I'm using poetry to manage packages and build the venv, and I'm operating all of the above from within the activated .venv (via poetry shell).
I've tried all the tips and tricks around appending the local path to sys.path, along the lines of the snippet below. Looking at the .js file that the convert utility generates, I gather it would work if all the code were in one file, but forcing bad coding practice doesn't sound right.
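Roughly, the sort of workaround I attempted at the top of script.py (a sketch; paths illustrative):
import os
import sys

# Attempted workaround: make the local package importable by adding
# the app directory to sys.path before importing mymodule.
sys.path.append(os.path.dirname(__file__))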
I imagine there could be some kind of C++ build-style --include argument to panel convert, but there are no man pages, and the closest I can get with the online documentation is --requirements ./mymodule, but no joy.
Could anybody please advise?
Copied from here: https://github.com/kubeflow/pipelines/issues/7608
I have a generated code file that runs against Kubeflow. It ran fine on Kubeflow v1, and now I'm moving it to Kubeflow v2. When I do this, I get the following error:
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
I honestly don't even know where to go next. It feels like something is fundamentally broken for something to fail in the first character, but I can't see it (it's inside the kubeflow execution).
Thanks!
Environment
How did you deploy Kubeflow Pipelines (KFP)?
Standard deployment to AWS
KFP version:
1.8.1
KFP SDK version:
1.8.12
Here are the logs:
time="2022-04-26T17:38:09.547Z" level=info msg="capturing logs" argo=true
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead:
https://pip.pypa.io/warnings/venv
[KFP Executor 2022-04-26 17:38:24,691 INFO]: Looking for component `run_info_fn` in --component_module_path `/tmp/tmp.NJW6PWXpIt/ephemeral_component.py`
[KFP Executor 2022-04-26 17:38:24,691 INFO]: Loading KFP component "run_info_fn" from /tmp/tmp.NJW6PWXpIt/ephemeral_component.py (directory "/tmp/tmp.NJW6PWXpIt" and module name "ephemeral_component")
Traceback (most recent call last):
File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/site-packages/kfp/v2/components/executor_main.py", line 104, in <module>
executor_main()
File "/usr/local/lib/python3.7/site-packages/kfp/v2/components/executor_main.py", line 94, in executor_main
executor_input = json.loads(args.executor_input)
File "/usr/local/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.7/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.7/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
time="2022-04-26T17:38:24.803Z" level=error msg="cannot save artifact /tmp/outputs/run_info/data" argo=true error="stat /tmp/outputs/run_info/data: no such file or directory"
Error: exit status 1
Here are the files to repro:
root_pipeline_04d99580c84b47c28405a2c8bcae8703.py
import kfp.v2.components
from kfp.v2.dsl import InputPath
from kubernetes.client.models import V1EnvVar
from kubernetes import client, config
from typing import NamedTuple
from base64 import b64encode
import kfp.v2.dsl as dsl
import kubernetes
import json
import kfp
from run_info import run_info_fn
from same_step_000_ce6494722c474dd3b8bef482bb976557 import same_step_000_ce6494722c474dd3b8bef482bb976557_fn
run_info_comp = kfp.v2.dsl.component(
    func=run_info_fn,
    packages_to_install=[
        "kfp",
        "dill",
    ],
)

same_step_000_ce6494722c474dd3b8bef482bb976557_comp = kfp.v2.dsl.component(
    func=same_step_000_ce6494722c474dd3b8bef482bb976557_fn,
    base_image="public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/codeserver-python:v1.5.0",
    packages_to_install=[
        "dill",
        "requests",
        # TODO: make this a loop
    ],
)
@kfp.dsl.pipeline(name="root_pipeline_compilation",)
def root(
    context: str = '', metadata_url: str = '',
):
    # Generate secrets (if not already created)
    secrets_by_env = {}

    env_vars = {
    }

    run_info = run_info_comp(run_id=kfp.dsl.RUN_ID_PLACEHOLDER)

    same_step_000_ce6494722c474dd3b8bef482bb976557 = same_step_000_ce6494722c474dd3b8bef482bb976557_comp(
        input_context_path="",
        run_info=run_info.outputs["run_info"],
        metadata_url=metadata_url
    )
    same_step_000_ce6494722c474dd3b8bef482bb976557.execution_options.caching_strategy.max_cache_staleness = "P0D"

    for k in env_vars:
        same_step_000_ce6494722c474dd3b8bef482bb976557.add_env_variable(V1EnvVar(name=k, value=env_vars[k]))
run_info.py
"""
The run_info component fetches metadata about the current pipeline execution
from kubeflow and passes it on to the user code step components.
"""
from typing import NamedTuple


def run_info_fn(
    run_id: str,
) -> NamedTuple("RunInfoOutput", [("run_info", str),]):
    from base64 import urlsafe_b64encode
    from collections import namedtuple
    import datetime
    import base64
    import dill
    import kfp

    client = kfp.Client(host="http://ml-pipeline:8888")
    run_info = client.get_run(run_id=run_id)

    run_info_dict = {
        "run_id": run_info.run.id,
        "name": run_info.run.name,
        "created_at": run_info.run.created_at.isoformat(),
        "pipeline_id": run_info.run.pipeline_spec.pipeline_id,
    }

    # Track kubernetes resources associated with the run.
    for r in run_info.run.resource_references:
        run_info_dict[f"{r.key.type.lower()}_id"] = r.key.id

    # Base64-encoded as value is visible in kubeflow ui.
    output = urlsafe_b64encode(dill.dumps(run_info_dict))

    return namedtuple("RunInfoOutput", ["run_info"])(
        str(output, encoding="ascii")
    )
same_step_000_ce6494722c474dd3b8bef482bb976557.py
import kfp
from kfp.v2.dsl import component, Artifact, Input, InputPath, Output, OutputPath, Dataset, Model
from typing import NamedTuple


def same_step_000_ce6494722c474dd3b8bef482bb976557_fn(
    input_context_path: InputPath(str),
    output_context_path: OutputPath(str),
    run_info: str = "gAR9lC4=",
    metadata_url: str = "",
):
    from base64 import urlsafe_b64encode, urlsafe_b64decode
    from pathlib import Path
    import datetime
    import requests
    import tempfile
    import dill
    import os

    input_context = None
    with Path(input_context_path).open("rb") as reader:
        input_context = reader.read()

    # Helper function for posting metadata to mlflow.
    def post_metadata(json):
        if metadata_url == "":
            return
        try:
            req = requests.post(metadata_url, json=json)
            req.raise_for_status()
        except requests.exceptions.HTTPError as err:
            print(f"Error posting metadata: {err}")

    # Move to writable directory as user might want to do file IO.
    # TODO: won't persist across steps, might need support in SDK?
    os.chdir(tempfile.mkdtemp())

    # Load information about the current experiment run:
    run_info = dill.loads(urlsafe_b64decode(run_info))

    # Post session context to mlflow.
    if len(input_context) > 0:
        input_context_str = urlsafe_b64encode(input_context)
        post_metadata(
            {
                "experiment_id": run_info["experiment_id"],
                "run_id": run_info["run_id"],
                "step_id": "same_step_000",
                "metadata_type": "input",
                "metadata_value": input_context_str,
                "metadata_time": datetime.datetime.now().isoformat(),
            }
        )

    # User code for step, which we run in its own execution frame.
    user_code = f"""
import dill

# Load session context into global namespace:
if { len(input_context) } > 0:
    dill.load_session("{ input_context_path }")

{dill.loads(urlsafe_b64decode("gASVGAAAAAAAAACMFHByaW50KCJIZWxsbyB3b3JsZCIplC4="))}

# Remove anything from the global namespace that cannot be serialised.
# TODO: this will include things like pandas dataframes, needs sdk support?
_bad_keys = []
_all_keys = list(globals().keys())
for k in _all_keys:
    try:
        dill.dumps(globals()[k])
    except TypeError:
        _bad_keys.append(k)

for k in _bad_keys:
    del globals()[k]

# Save new session context to disk for the next component:
dill.dump_session("{output_context_path}")
"""

    # Runs the user code in a new execution frame. Context from the previous
    # component in the run is loaded into the session dynamically, and we run
    # with a single globals() namespace to simulate top-level execution.
    exec(user_code, globals(), globals())

    # Post new session context to mlflow:
    with Path(output_context_path).open("rb") as reader:
        context = urlsafe_b64encode(reader.read())
        post_metadata(
            {
                "experiment_id": run_info["experiment_id"],
                "run_id": run_info["run_id"],
                "step_id": "same_step_000",
                "metadata_type": "output",
                "metadata_value": context,
                "metadata_time": datetime.datetime.now().isoformat(),
            }
        )
Python file to execute the run:
from sameproject.ops import helpers
from pathlib import Path
import importlib
import kfp


def deploy(compiled_path: Path, root_module_name: str):
    with helpers.add_path(str(compiled_path)):
        kfp_client = kfp.Client()  # only supporting 'kubeflow' namespace
        root_module = importlib.import_module(root_module_name)

        return kfp_client.create_run_from_pipeline_func(
            root_module.root,
            arguments={},
        )
It turns out this has to do with not compiling with the right execution mode. If you're getting this error, your compile call should look like this:
Compiler(mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE).compile(
    pipeline_func=root_module.root,
    package_path=str(package_yaml_path),
)
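For context, here is how that fits the deploy helper from the question, as a minimal sketch (package_yaml_path is an illustrative name, not from the original code):
import kfp
from kfp.compiler import Compiler

# Compile in V2-compatible mode so the v2 executor receives well-formed
# --executor_input JSON instead of the v1-style payload it can't parse.
Compiler(mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE).compile(
    pipeline_func=root_module.root,
    package_path=str(package_yaml_path),  # e.g. "pipeline.yaml" (illustrative)
)
If I remember the 1.8 SDK correctly, create_run_from_pipeline_func also accepts a mode argument, so passing mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE there should work as well.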
I'm attempting to create a function to load Sagemaker models within a jupyter notebook using shell commands. The problem arises when I try to store the function in a utilities.py file and source it for multiple notebooks.
Here are the contents of the utilities.py file that I am sourcing in a jupyter lab notebook.
def get_aws_sagemaker_model(model_loc):
    """
    TO BE USED IN A JUPYTER NOTEBOOK

    Extracts a sagemaker model that has run and been completed.
    Deletes the copied items and leaves you with the model.

    Note that you will need to have the package installed with the correct
    versioning for whatever model you have trained,
    i.e. if you are loading an XGBoost model, have XGBoost installed.

    Args:
        model_loc (str): s3 location of the model including file name

    Return:
        model: unpacked and loaded model
    """
    import re
    import tarfile
    import os
    import pickle as pkl

    # extract the filename after the last slash
    packed_model_name = re.search("(.*\/)(.*)$", model_loc)[2]

    # copy and paste model file locally
    command_string = f"!aws s3 cp {model_loc} ."
    exec(command_string)

    # use tarfile to extract
    tar = tarfile.open(packed_model_name)

    # extract filename from tarfile
    unpacked_model_name = tar.getnames()[0]
    tar.extractall()
    tar.close()

    model = pkl.load(open(unpacked_model_name, 'rb'))

    # cleanup copied files and unpacked model
    os.remove(packed_model_name)
    os.remove(unpacked_model_name)

    return model
The error occurs when trying to execute the command string:
Traceback (most recent call last):
File "/home/ec2-user/anaconda3/envs/env/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3444, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "/tmp/ipykernel_10889/996524724.py", line 1, in <module>
model = get_aws_sagemaker_model("my-model-loc")
File "/home/ec2-user/SageMaker/env/src/utilities/model_helper_functions.py", line 167, in get_aws_sagemaker_model
exec(command_string)
File "<string>", line 1
!aws s3 cp my-model-loc .
^
SyntaxError: invalid syntax
It seems like jupyter isn't receiving the command before exec checks the syntax. Is there a way around this besides copying the function into each jupyter notebook that I use?
Thank you!
You can use the transform_cell method of IPython's shell to transform IPython syntax into valid plain Python:
from IPython import get_ipython
ipython = get_ipython()
code = ipython.transform_cell('!ls')
print(code)
which will show:
get_ipython().system('ls')
You can use that as input for exec:
exec(code)
Or directly:
exec(ipython.transform_cell('!ls'))
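Applied inside get_aws_sagemaker_model from the question (model_loc is its parameter), that becomes roughly the following untested sketch; it requires running under an IPython/Jupyter kernel:
from IPython import get_ipython

# Translate the bang command into plain Python, then execute it.
command_string = f"!aws s3 cp {model_loc} ."
exec(get_ipython().transform_cell(command_string))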
A ! magic can be included in a function definition, but it can't be run via exec:
def foo(astr):
    !ls $astr

foo('*.py')
will do the same as
!ls *.py
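If the helper must also work outside IPython entirely, a plain subprocess call sidesteps the magic altogether; this is an alternative sketch, not the approach above (model_loc is the function's parameter):
import subprocess

# Equivalent of `!aws s3 cp <model_loc> .` with no IPython machinery.
subprocess.run(["aws", "s3", "cp", model_loc, "."], check=True)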
I'm trying to make an Instagram bot, but I could only run the code once. It worked fine the first time, but when I tried again it gave me this error.
I won't write my username and password in this question, obviously haha.
from instabot import *

session = Bot()
session.login(username="myuser",
              password="mypass")
and I get this error:
2021-02-01 16:07:42,401 - INFO - Instabot version: 0.117.0 Started
Traceback (most recent call last):
File "C:/Users/EQUIPO/Desktop/5 CUATRI/Phyton/Ejercicios Prueba/nsoe.py", line 3, in <module>
session.login(username = "nota.niceplace",
File "C:\Program Files\Python38\lib\site-packages\instabot\bot\bot.py", line 443, in login
if self.api.login(**args) is False:
File "C:\Program Files\Python38\lib\site-packages\instabot\api\api.py", line 240, in login
self.load_uuid_and_cookie(load_cookie=use_cookie, load_uuid=use_uuid)
File "C:\Program Files\Python38\lib\site-packages\instabot\api\api.py", line 199, in load_uuid_and_cookie
return load_uuid_and_cookie(self, load_uuid=load_uuid, load_cookie=load_cookie)
File "C:\Program Files\Python38\lib\site-packages\instabot\api\api_login.py", line 354, in load_uuid_and_cookie
self.cookie_dict["urlgen"]
KeyError: 'urlgen'
You need to delete the config folder, which is automatically created when you run this code the first time.
So you need to delete this folder every time before running your code.
This config folder is in the same directory where your file is saved.
I had the same problem and was able to solve it this way.
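In code, the minimal version of that cleanup is just the following (a sketch; it assumes the default config location next to your script):
import shutil

# Delete instabot's stale session folder before logging in again.
shutil.rmtree("config", ignore_errors=True)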
This code will fix everything automatically for you; just change the image name and path.
from instabot import Bot
import os
import shutil


def clean_up(i):
    dir = "config"
    remove_me = "imgs\{}.REMOVE_ME".format(i)

    # checking whether config folder exists or not
    if os.path.exists(dir):
        try:
            # removing it so we can upload new image
            shutil.rmtree(dir)
        except OSError as e:
            print("Error: %s - %s." % (e.filename, e.strerror))

    if os.path.exists(remove_me):
        src = os.path.realpath("imgs\{}".format(i))
        os.rename(remove_me, src)


def upload_post(i):
    bot = Bot()
    bot.login(username="your_username", password="your_password")
    bot.upload_photo("imgs/{}".format(i), caption="Caption for the post")


if __name__ == '__main__':
    # enter the name of your image below
    image_name = "img.jpg"
    clean_up(image_name)
    upload_post(image_name)
As Raj said, you need to delete the config folder, like this, from the directory where you ran the Python script. Sorry for making a separate answer, but it was not immediately clear to me where this config folder was, so I thought it might help.
rm -rf config
Unfortunately, the library's error messages are quite poor.
Unfortunately, it seems that the instabot library is no longer maintained. Don't try to run anything using that lib; it won't work and, as some users have commented, it might even compromise your account. Sorry.
I've just finished testing a Python programme which involves logging into a site and requires a CSRF cookie to be set. I've tried packaging it as an exe using py2exe and got a socket error. I have the same problem when I try with PyInstaller. Googling the Errno, I found a few other people with the same problem, so I know the problem has to do with the location of SSL certificates.
This is my site_agent class including the logging calls.
class site_agent:
    def __init__(self):
        self.get_params()
        URL = root_url + '/accounts/login/'

        # Retrieve the CSRF token first
        self.agent = requests.session()
        self.agent.get(URL)  # retrieves the cookie; this line throws the error
        self.csrftoken = self.agent.cookies['csrftoken']

        # Set up login data including the CSRF cookie
        login_data = {'username': self.username,
                      'password': self.password,
                      'csrfmiddlewaretoken': self.csrftoken}

        # Log in
        logging.info('Logging in')
        response = self.agent.post(URL, data=login_data, headers=hdr)
The error comes at the self.agent.get(URL) line and the Traceback shows:
Traceback (most recent call last):
File "<string>", line 223, in <module>
File "<string>", line 198, in main
File "<string>", line 49, in __init__
File "C:\pyinstaller-2.0\pyinstaller-2.0\autoresponder\build\pyi.win32\autoresponder\out00-PYZ.pyz\requests.sessions", line 350, in get
File "C:\pyinstaller-2.0\pyinstaller-2.0\autoresponder\build\pyi.win32\autoresponder\out00-PYZ.pyz\requests.sessions", line 338, in request
File "C:\pyinstaller-2.0\pyinstaller-2.0\autoresponder\build\pyi.win32\autoresponder\out00-PYZ.pyz\requests.sessions", line 441, in send
File "C:\pyinstaller-2.0\pyinstaller-2.0\autoresponder\build\pyi.win32\autoresponder\out00-PYZ.pyz\requests.adapters", line 331, in send
requests.exceptions.SSLError: [Errno 185090050] _ssl.c:336: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib
Does this mean that the problem is in requests.adapters?
If so, can I just edit it in my installed Python packages to look for cacert.pem somewhere else, rebuild my exe with py2exe or PyInstaller, then change it back in my installed version of Python?
EDIT
I now have the programme running after compiling with PyInstaller and setting verify=False in all requests.get() and requests.post() calls. But SSL is there for a reason, and I'd really like to fix this error before letting anyone use the tool.
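For reference, that workaround amounts to calls like the following, which disable certificate verification entirely (fine for testing, not something to ship):
response = self.agent.post(URL, data=login_data, headers=hdr, verify=False)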
This can be solved easily if you are using the "requests" module.
1) Place the below code in your main Python file where the "requests" module is used:
import os

os.environ['REQUESTS_CA_BUNDLE'] = "certifi/cacert.pem"
2) Within your distributable folder where the exe is present, create a folder called "certifi" and place the "cacert.pem" file within it.
3) You can find the "cacert.pem" file by installing certifi (pip install certifi) and then running:
import certifi
certifi.where()
Awesome.. now your distributable includes the necessary certificates to validate SSL calls.
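Alternatively, if certifi is importable at runtime inside your bundle, you can avoid hard-coding the relative path and ask certifi for the location directly; a minimal sketch:
import os
import certifi

# certifi.where() returns the absolute path to the bundled cacert.pem.
os.environ['REQUESTS_CA_BUNDLE'] = certifi.where()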
If using PyInstaller, create a hook-requests.py file in PyInstaller\hooks\ for the requests lib containing:
from hookutils import collect_data_files
# Get the cacert.pem
datas = collect_data_files('requests')
In addition to the answer given by frmdstryr, I had to tell requests where the cacert.pem is. I added the following lines right after my imports:
# Get the base directory
if getattr(sys, 'frozen', None):  # 'frozen' is set while running a PyInstaller onefile build
    basedir = sys._MEIPASS
else:
    basedir = os.path.dirname(__file__)
basedir = os.path.normpath(basedir)

# Locate the SSL certificate for requests
os.environ['REQUESTS_CA_BUNDLE'] = os.path.join(basedir, 'requests', 'cacert.pem')
This solution is derived from this link: http://kittyandbear.net/python/pyinstaller-request-sslerror-manual-cacert-solution.txt
I was struggling with this as well. The hook-requests.py provided by frmdstryr did not work for me, but this modified one did:
from PyInstaller.utils.hooks import collect_data_files
# Get the cacert.pem
datas = collect_data_files('certifi')
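If you'd rather not edit PyInstaller's hooks directory at all, roughly the same effect can be had from a .spec file, which is itself Python. This is a sketch of just the Analysis fragment (your_script.py is a placeholder, and the rest of the spec file is omitted):
import certifi

a = Analysis(
    ['your_script.py'],  # placeholder for your entry script
    datas=[(certifi.where(), 'certifi')],  # bundle cacert.pem under certifi/
)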