AzureML Python SDK OutputFileDatasetConfig and Datastore - python

I am new to Azure, and dealing with all these paths is proving to be extremely challenging. I am trying to create a pipeline that contains a dataprep.py step and an AutoML step. What I want to do is (after passing the input to the dataprep block and performing several operations on it) save the resulting TabularDataset in the datastore and have it as an output, so that I can then reuse it in my train block.
My dataprep.py file:
# ----- dataprep imports (argparse, Run, Dataset) and helper functions -----
parser = argparse.ArgumentParser()
parser.add_argument("--month_train", required=True)
parser.add_argument("--year_train", required=True)
parser.add_argument('--output_path', dest = 'output_path', required=True)
args = parser.parse_args()
run = Run.get_context()
ws = run.experiment.workspace
datastore = ws.get_default_datastore()
name_dataset_input = 'Customer_data_'+str(args.year_train)
name_dataset_output = 'DATA_PREP_'+str(args.year_train)+'_'+str(args.month_train)
# get the input dataset by name
ds = Dataset.get_by_name(ws, name_dataset_input)
df = ds.to_pandas_dataframe()
# apply is one of my dataprep functions that I defined earlier
df = apply(df, args.month_train)
# this is where I am having issues: I want to save this in the datastore but also have it as an output
ds = Dataset.Tabular.register_pandas_dataframe(df, args.output_path, name_dataset_output)
The pipeline step definition:
from azureml.data import OutputFileDatasetConfig
from azureml.pipeline.steps import PythonScriptStep
prepped_data_path = OutputFileDatasetConfig(name="output_path", destination=(datastore, 'managed-dataset/{run-id}/{output-name}'))

dataprep_step = PythonScriptStep(
    name="dataprep",
    script_name="dataprep.py",
    compute_target=compute_target,
    runconfig=aml_run_config,
    arguments=["--output_path", prepped_data_path, "--month_train", month_train, "--year_train", year_train],
    allow_reuse=True
)
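A minimal sketch of one possible way to wire this up (an assumption based on the usual OutputFileDatasetConfig pattern, not a verified answer; the file name prepped.csv and the dataset name DATA_PREP_OUTPUT are placeholders):

# Inside dataprep.py: args.output_path is handled as a folder by the step output,
# so the prepped data can simply be written into it ("prepped.csv" is a placeholder name).
import os
os.makedirs(args.output_path, exist_ok=True)
df.to_csv(os.path.join(args.output_path, "prepped.csv"), index=False)

# Back in the pipeline definition: expose the same output as a tabular dataset for the next step
# and optionally register it when the run completes ("DATA_PREP_OUTPUT" is a placeholder name).
prepped_tabular = prepped_data_path.read_delimited_files()
prepped_tabular = prepped_tabular.register_on_complete(name="DATA_PREP_OUTPUT")

Verify read_delimited_files() and register_on_complete() against the azureml-sdk version in use before relying on this.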

Related

How to serialize as string the key in kafka messages

I am using confluent-kafka and I need to serialize my keys as strings and produce some messages. I have working code for the case where I retrieve the schema from the schema registry and use it to produce a message. The problem is that it fails when I try to read the schema from a local file instead.
The code below is the working one for the schema registry:
import argparse
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import StringSerializer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka import SerializingProducer
import avro.schema
SCHEMA_HOST = '192.168.40.10'
TOPIC = 'my_topic'
SCHEMA = 'path/to/schema.avsc'
# Just parse arguments
parser = argparse.ArgumentParser(description="Avro Kafka Generator")
parser.add_argument('--schema_registry_host', default=SCHEMA_HOST, help="schema registry host")
parser.add_argument('--schema', type=str, default=SCHEMA, help="schema to produce under")
parser.add_argument('--topic', type=str, default=TOPIC, help="topic to publish to")
parser.add_argument('--frequency', type=float, default=1.0, help="number of message per second")
args = parser.parse_args()
# Actual code
schema_registry_conf = {'url': "http://{}:8081".format(SCHEMA_HOST)}
schema_registry_client = SchemaRegistryClient(schema_registry_conf)
schema = schema_registry_client.get_latest_version(subject_name=TOPIC + "-value")
# schema = schema_registry_client.get_schema(schema.schema_id)
schema = schema_registry_client.get_schema(schema.schema_id)
schema_str = schema.schema_str
pro_conf = {"auto.register.schemas": False}
avro_serializer = AvroSerializer(schema_registry_client=schema_registry_client, schema_str=schema_str, conf=pro_conf)
conf = {'bootstrap.servers': "{}:9095".format(args.schema_registry_host),
        'schema.registry.url': "http://{}:8081".format(args.schema_registry_host)}
# avro_producer = AvroProducer(conf, default_value_schema=value_schema)
producer_conf = {'bootstrap.servers': "{}:9095".format(SCHEMA_HOST),
                 'key.serializer': StringSerializer('utf_8'),
                 'value.serializer': avro_serializer}
avro_producer = SerializingProducer(producer_conf)
But when I try to use a variation for the local file it fails:
# Read schema from local file
value_schema = avro.schema.Parse(open(args.schema, "r").read())
schema_str = open(args.schema, "r").read().replace(' ', '').replace('\n', '')
pro_conf = {"auto.register.schemas": True}
avro_serializer = AvroSerializer(schema_registry_client=schema_registry_client, schema_str=schema_str, conf=pro_conf)
This part is common to both versions:
producer_conf = {'bootstrap.servers': "{}:9095".format(SCHEMA_HOST),
                 'key.serializer': StringSerializer('utf_8'),
                 'value.serializer': avro_serializer}
avro_producer = SerializingProducer(producer_conf)
avro_producer.produce(topic=args.topic, value=message)
The error I am getting is the following:
KafkaError{code=_VALUE_SERIALIZATION,val=-161,str="'RecordSchema'
object has no attribute 'lookup_schema'"}
Obviously this is not the best approach, and even if it worked the code would be ugly and error-prone. But it doesn't even work, so I need some help on how I could read a local schema and use the AvroSerializer afterwards.
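A minimal sketch of one thing that may be worth trying (an assumption, not a verified fix): read the local .avsc file as a plain string and pass it straight to AvroSerializer, skipping avro.schema.Parse entirely, mirroring the keyword arguments already used in the working registry version above:

# Read the local Avro schema as a plain string and hand it to AvroSerializer directly.
# (Assumption: AvroSerializer only needs the schema text, not a parsed RecordSchema object.)
with open(args.schema, "r") as f:
    schema_str = f.read()

avro_serializer = AvroSerializer(schema_registry_client=schema_registry_client,
                                 schema_str=schema_str,
                                 conf={"auto.register.schemas": True})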

Unable to add Task To Azure Batch Model

I am trying to create an Azure Batch job with a task that uses output_files as a task parameter.
tasks = list()
command_task = (r"cmd /c dir")
# Not providing actual property values for security purposes
containerName = r'ContainerName'
azureStorageAccountName = r'AccountName'
azureStorageAccountKey = r'AccountKey'
sas_Token = generate_account_sas(account_name=azureStorageAccountName, account_key=azureStorageAccountKey, resource_types=ResourceTypes(object=True), permission=AccountSasPermissions(read=True, write=True), expiry=datetime.datetime.utcnow() + timedelta(hours=1))
url = f"https://{azureStorageAccountName}.blob.core.windows.net/{containerName}?{sas_Token}"
output_file = batchmodels.OutputFile(
    file_pattern=r"..\std*.txt",
    destination=batchmodels.OutputFileDestination(
        container=batchmodels.OutputFileBlobContainerDestination(container_url=url),
        path="abc"),
    upload_options='taskCompletion')
tasks.append(batchmodels.TaskAddParameter(id='Task1', display_name='Task1', command_line=command_task, user_identity=user, output_files=[output_file]))
batch_service_client.task.add_collection(job_id, tasks)
On debugging this code I get an exception. But on removing the output_files parameter, everything works fine and the job is created with the task.
The problem was that I had passed upload_options as a plain string instead of an OutputFileUploadOptions object when creating the OutputFile:
output_file = batchmodels.OutputFile(
    file_pattern=r"..\std*.txt",
    destination=batchmodels.OutputFileDestination(
        container=batchmodels.OutputFileBlobContainerDestination(container_url=url),
        path="abc"),
    upload_options=batchmodels.OutputFileUploadOptions('taskCompletion'))
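As a side note (an assumption to verify against the azure-batch SDK version in use, not part of the original answer), the same option can also be written with the explicit keyword argument and the upload-condition enum:

# Equivalent, more explicit form (assumes batchmodels exposes OutputFileUploadCondition)
upload_opts = batchmodels.OutputFileUploadOptions(
    upload_condition=batchmodels.OutputFileUploadCondition.task_completion)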

How can I save the headers and values in an HTML <script> as a table in a CSV file?

I'm new to writing code. Using Selenium and BeautifulSoup, I managed to reach the script I want among the dozens of scripts on the web page. I am looking for scripts[17]. When the code below is executed, scripts[17] gives the result that follows.
The last part of my code:
html = driver.page_source
soup = BeautifulSoup(html, "html.parser")
scripts = soup.find_all("script")
x = scripts[17]
print(x)
Result/output (note: the list of dates is at the start of scripts[17]; slide the bar to see it):
<script language="JavaScript"> var theHlp='/yardim/matris.asp';var theTitle = 'Piyasa Değeri';var theCaption='Cam (TL)';var lastmod = '';var h='<a class=hisselink href=../Hisse/HisseAnaliz.aspx?HNO=';var e='<a class=hisselink href=../endeks/endeksAnaliz.aspx?HNO=';var d='<center><font face=symbol size=1 color=#FF0000><b>ß</b></font></center>';var u='<center><font face=symbol size=1 color=#008000><b>İ</b></font></center>';var n='<center><font face=symbol size=1 color=#00A000><b>=</b></font></center>';var fr='<font color=#FF0000>';var fg='<font color=#008000>';var theFooter=new Array();var theCols = new Array();theCols[0] = new Array('Hisse',4,50);theCols[1] = new Array('2012.12',1,60);theCols[2] = new Array('2013.03',1,60);theCols[3] = new Array('2013.06',1,60);theCols[4] = new Array('2013.09',1,60);theCols[5] = new Array('2013.12',1,60);theCols[6] = new Array('2014.03',1,60);theCols[7] = new Array('2014.06',1,60);theCols[8] = new Array('2014.09',1,60);theCols[9] = new Array('2014.12',1,60);theCols[10] = new Array('2015.03',1,60);theCols[11] = new Array('2015.06',1,60);theCols[12] = new Array('2015.09',1,60);theCols[13] = new Array('2015.12',1,60);theCols[14] = new Array('2016.03',1,60);theCols[15] = new Array('2016.06',1,60);theCols[16] = new Array('2016.09',1,60);theCols[17] = new Array('2016.12',1,60);theCols[18] = new Array('2017.03',1,60);theCols[19] = new Array('2017.06',1,60);theCols[20] = new Array('2017.09',1,60);theCols[21] = new Array('2017.12',1,60);theCols[22] = new Array('2018.03',1,60);theCols[23] = new Array('2018.06',1,60);theCols[24] = new Array('2018.09',1,60);theCols[25] = new Array('2018.12',1,60);theCols[26] = new Array('2019.03',1,60);theCols[27] = new Array('2019.06',1,60);theCols[28] = new Array('2019.09',1,60);theCols[29] = new Array('2019.12',1,60);theCols[30] = new Array('2020.03',1,60);var theRows = new Array();
theRows[0] = new Array ('<b>'+h+'30>ANA</B></a>','1,114,919,783.60','1,142,792,778.19','1,091,028,645.38','991,850,000.48','796,800,000.38','697,200,000.34','751,150,000.36','723,720,000.33','888,000,000.40','790,320,000.36','883,560,000.40','927,960,000.42','737,040,000.33','879,120,000.40','914,640,000.41','927,960,000.42','1,172,160,000.53','1,416,360,000.64','1,589,520,000.72','1,552,500,000.41','1,972,500,000.53','2,520,000,000.67','2,160,000,000.58','2,475,000,000.66','2,010,000,000.54','2,250,000,000.60','2,077,500,000.55','2,332,500,000.62','3,270,000,000.87','2,347,500,000.63');
theRows[1] = new Array ('<b>'+h+'89>DEN</B></a>','55,200,000.00','55,920,000.00','45,960,000.00','42,600,000.00','35,760,000.00','39,600,000.00','40,200,000.00','47,700,000.00','50,460,000.00','45,300,000.00','41,760,000.00','59,340,000.00','66,600,000.00','97,020,000.00','81,060,000.00','69,300,000.00','79,800,000.00','68,400,000.00','66,900,000.00','66,960,000.00','71,220,000.00','71,520,000.00','71,880,000.00','60,600,000.00','69,120,000.00','62,640,000.00','57,180,000.00','89,850,000.00','125,100,000.00','85,350,000.00');
theRows[2] = new Array ('<b>'+h+'269>SIS</B></a>','4,425,000,000.00','4,695,000,000.00','4,050,000,000.00','4,367,380,000.00','4,273,120,000.00','3,644,720,000.00','4,681,580,000.00','4,913,000,000.00','6,188,000,000.00','5,457,000,000.00','6,137,000,000.00','5,453,000,000.00','6,061,000,000.00','6,954,000,000.00','6,745,000,000.00','6,519,000,000.00','7,851,500,000.00','8,548,500,000.00','9,430,000,000.00','9,225,000,000.00','10,575,000,000.00','11,610,000,000.00','9,517,500,000.00','13,140,000,000.00','12,757,500,000.00','13,117,500,000.00','11,677,500,000.00','10,507,500,000.00','11,857,500,000.00','9,315,000,000.00');
theRows[3] = new Array ('<b>'+h+'297>TRK</B></a>','1,692,579,200.00','1,983,924,800.00','1,831,315,200.00','1,704,000,000.00','1,803,400,000.00','1,498,100,000.00','1,803,400,000.00','1,884,450,000.00','2,542,160,000.00','2,180,050,000.00','2,069,200,000.00','1,682,600,000.00','1,619,950,000.00','1,852,650,000.00','2,040,600,000.00','2,315,700,000.00','2,641,200,000.00','2,938,800,000.00','3,599,100,000.00','4,101,900,000.00','5,220,600,000.00','5,808,200,000.00','4,689,500,000.00','5,375,000,000.00','3,787,500,000.00','4,150,000,000.00','3,662,500,000.00','3,712,500,000.00','4,375,000,000.00','3,587,500,000.00');
var thetable=new mytable();thetable.tableWidth=650;thetable.shownum=false;thetable.controlaccess=true;thetable.visCols=new Array(true,true,true,true,true);thetable.initsort=new Array(0,-1);thetable.inittable();thetable.refreshTable();</script>
My purpose is to extract this output into a table and save it as a CSV file. How can I extract this script the way I want?
All dates should be across the top, all names down the left, and the values in between:
Hisse 2012.12 2013.3 2013.4 ...
ANA 1,114,919,783.60 1,142,792,778.19 1,091,028,645.38 ...
DEN 55,200,000.00 55,920,000.00 45,960,000.00 ....
.
.
.
Solution
The custom function process_scripts() will produce what you are looking for. I am using the dummy data given below (at the end). First we check that the code does what is expected, by creating a pandas dataframe and inspecting the output.
You could also open this Colab Jupyter Notebook and run it in the cloud for free. That way you don't have to worry about any installation or setup and can simply focus on examining the solution itself.
1. Processing A Single Script
## Define CSV file-output folder-path
OUTPUT_PATH = './output'
## Process scripts
dfs = process_scripts(scripts = [s],
                      output_path = OUTPUT_PATH,
                      save_to_csv = False,
                      verbose = 0)
print(dfs[0].reset_index(drop=True))
Output:
Name 2012.12 ... 2019.12 2020.03
0 ANA 1,114,919,783.60 ... 3,270,000,000.87 2,347,500,000.63
1 DEN 55,200,000.00 ... 125,100,000.00 85,350,000.00
2 SIS 4,425,000,000.00 ... 11,857,500,000.00 9,315,000,000.00
3 TRK 1,692,579,200.00 ... 4,375,000,000.00 3,587,500,000.00
[4 rows x 31 columns]
2. Processing All the Scripts
You can process all your scripts using the custom-defined function process_scripts(). The code is given below.
## Define CSV file-output folder-path
OUTPUT_PATH = './output'
## Process scripts
dfs = process_scripts(scripts,
                      output_path = OUTPUT_PATH,
                      save_to_csv = True,
                      verbose = 0)
## To clear the output dir-contents
#!rm -f $OUTPUT_PATH/*
I did this on Google Colab and it worked as expected.
3. Making Paths in an OS-agnostic Manner
Constructing paths for Windows and Unix-based systems can be quite different. The following shows a method that works without having to worry about which OS you run the code on. I have used the os library here; however, I would also suggest looking at the pathlib library (see the short sketch right after this snippet).
# Define relative path for output-folder
OUTPUT_PATH = './output'
# Dynamically define absolute path
pwd = os.getcwd() # present-working-directory
OUTPUT_PATH = os.path.join(pwd, os.path.abspath(OUTPUT_PATH))
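For reference, a minimal pathlib equivalent of the same idea (a sketch; the folder name 'output' is just the example used above):

from pathlib import Path

# Resolve './output' relative to the current working directory
OUTPUT_PATH = (Path.cwd() / 'output').resolve()
OUTPUT_PATH.mkdir(parents=True, exist_ok=True)  # create it if it does not exist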
4. Code: custom function process_scripts()
Here we use the re (regular expressions) library, along with pandas for organizing the data in a tabular format and then writing it to a CSV file. The tqdm library is used to give you a nice progress bar while processing multiple scripts. Please see the comments in the code for what to do if you are not running it from a Jupyter notebook. The os library is used for path manipulation and for creating the output directory.
#pip install -U pandas
#pip install tqdm
import pandas as pd
import re # regex
import os
from tqdm.notebook import tqdm
# Use the following line if not using a jupyter notebook
# from tqdm import tqdm
def process_scripts(scripts,
                    output_path='./output',
                    save_to_csv: bool=False,
                    verbose: int=0):
    """Process all scripts and return a list of dataframes and
    optionally write each dataframe to a CSV file.

    Parameters
    ----------
    scripts: list of scripts
    output_path (str): output-folder-path for csv files
    save_to_csv (bool): default is False
    verbose (int): prints output for verbose>0

    Example
    -------
    OUTPUT_PATH = './output'
    dfs = process_scripts(scripts,
                          output_path = OUTPUT_PATH,
                          save_to_csv = True,
                          verbose = 0)

    ## To clear the output dir-contents
    #!rm -f $OUTPUT_PATH/*
    """
    ## Define regex patterns and compile for speed
    pat_header = re.compile(r"theCols\[\d+\] = new Array\s*\([\'](\d{4}\.\d{1,2})[\'],\d+,\d+\)")
    pat_line = re.compile(r"theRows\[\d+\] = new Array\s*\((.*)\).*")
    pat_code = re.compile("([A-Z]{3})")

    # Calculate zfill-digits
    zfill_digits = len(str(len(scripts)))
    print(f'Total scripts: {len(scripts)}')

    # Create output_path
    if not os.path.exists(output_path):
        os.makedirs(output_path)

    # Define a list of dataframes:
    #   an accumulator of all scripts
    dfs = []

    ## If you do not have tqdm installed, uncomment the
    #  next line and comment out the following line.
    # for script_num, script in enumerate(scripts):
    for script_num, script in enumerate(tqdm(scripts, desc='Scripts Processed')):

        ## Extract: Headers, Rows
        #  Rows: code (Name: single column), line-data (multi-column)
        headers = script.strip().split('\n', 0)[0]
        headers = ['Name'] + re.findall(pat_header, headers)
        lines = re.findall(pat_line, script)
        codes = [re.findall(pat_code, line)[0] for line in lines]

        # Clean data for each row
        lines_data = dict()
        for line, code in zip(lines, codes):
            line_data = line.replace("','", "|").split('|')
            line_data[-1] = line_data[-1].replace("'", "")
            line_data[0] = code
            lines_data.update({code: line_data.copy()})

        if verbose > 0:
            print('{}: {}'.format(script_num, codes))

        ## Load data into a pandas-dataframe
        #  and write to csv.
        df = pd.DataFrame(lines_data).T
        df.columns = headers
        dfs.append(df.copy())  # update list

        # Write to CSV
        if save_to_csv:
            num_label = str(script_num).zfill(zfill_digits)
            script_file_csv = f'Script_{num_label}.csv'
            script_path = os.path.join(output_path, script_file_csv)
            df.to_csv(script_path, index=False)

    return dfs
5. Dummy Data
## Dummy Data
s = """
<script language="JavaScript"> var theHlp='/yardim/matris.asp';var theTitle = 'Piyasa Değeri';var theCaption='Cam (TL)';var lastmod = '';var h='<a class=hisselink href=../Hisse/HisseAnaliz.aspx?HNO=';var e='<a class=hisselink href=../endeks/endeksAnaliz.aspx?HNO=';var d='<center><font face=symbol size=1 color=#FF0000><b>ß</b></font></center>';var u='<center><font face=symbol size=1 color=#008000><b>İ</b></font></center>';var n='<center><font face=symbol size=1 color=#00A000><b>=</b></font></center>';var fr='<font color=#FF0000>';var fg='<font color=#008000>';var theFooter=new Array();var theCols = new Array();theCols[0] = new Array('Hisse',4,50);theCols[1] = new Array('2012.12',1,60);theCols[2] = new Array('2013.03',1,60);theCols[3] = new Array('2013.06',1,60);theCols[4] = new Array('2013.09',1,60);theCols[5] = new Array('2013.12',1,60);theCols[6] = new Array('2014.03',1,60);theCols[7] = new Array('2014.06',1,60);theCols[8] = new Array('2014.09',1,60);theCols[9] = new Array('2014.12',1,60);theCols[10] = new Array('2015.03',1,60);theCols[11] = new Array('2015.06',1,60);theCols[12] = new Array('2015.09',1,60);theCols[13] = new Array('2015.12',1,60);theCols[14] = new Array('2016.03',1,60);theCols[15] = new Array('2016.06',1,60);theCols[16] = new Array('2016.09',1,60);theCols[17] = new Array('2016.12',1,60);theCols[18] = new Array('2017.03',1,60);theCols[19] = new Array('2017.06',1,60);theCols[20] = new Array('2017.09',1,60);theCols[21] = new Array('2017.12',1,60);theCols[22] = new Array('2018.03',1,60);theCols[23] = new Array('2018.06',1,60);theCols[24] = new Array('2018.09',1,60);theCols[25] = new Array('2018.12',1,60);theCols[26] = new Array('2019.03',1,60);theCols[27] = new Array('2019.06',1,60);theCols[28] = new Array('2019.09',1,60);theCols[29] = new Array('2019.12',1,60);theCols[30] = new Array('2020.03',1,60);var theRows = new Array();
theRows[0] = new Array ('<b>'+h+'30>ANA</B></a>','1,114,919,783.60','1,142,792,778.19','1,091,028,645.38','991,850,000.48','796,800,000.38','697,200,000.34','751,150,000.36','723,720,000.33','888,000,000.40','790,320,000.36','883,560,000.40','927,960,000.42','737,040,000.33','879,120,000.40','914,640,000.41','927,960,000.42','1,172,160,000.53','1,416,360,000.64','1,589,520,000.72','1,552,500,000.41','1,972,500,000.53','2,520,000,000.67','2,160,000,000.58','2,475,000,000.66','2,010,000,000.54','2,250,000,000.60','2,077,500,000.55','2,332,500,000.62','3,270,000,000.87','2,347,500,000.63');
theRows[1] = new Array ('<b>'+h+'89>DEN</B></a>','55,200,000.00','55,920,000.00','45,960,000.00','42,600,000.00','35,760,000.00','39,600,000.00','40,200,000.00','47,700,000.00','50,460,000.00','45,300,000.00','41,760,000.00','59,340,000.00','66,600,000.00','97,020,000.00','81,060,000.00','69,300,000.00','79,800,000.00','68,400,000.00','66,900,000.00','66,960,000.00','71,220,000.00','71,520,000.00','71,880,000.00','60,600,000.00','69,120,000.00','62,640,000.00','57,180,000.00','89,850,000.00','125,100,000.00','85,350,000.00');
theRows[2] = new Array ('<b>'+h+'269>SIS</B></a>','4,425,000,000.00','4,695,000,000.00','4,050,000,000.00','4,367,380,000.00','4,273,120,000.00','3,644,720,000.00','4,681,580,000.00','4,913,000,000.00','6,188,000,000.00','5,457,000,000.00','6,137,000,000.00','5,453,000,000.00','6,061,000,000.00','6,954,000,000.00','6,745,000,000.00','6,519,000,000.00','7,851,500,000.00','8,548,500,000.00','9,430,000,000.00','9,225,000,000.00','10,575,000,000.00','11,610,000,000.00','9,517,500,000.00','13,140,000,000.00','12,757,500,000.00','13,117,500,000.00','11,677,500,000.00','10,507,500,000.00','11,857,500,000.00','9,315,000,000.00');
theRows[3] = new Array ('<b>'+h+'297>TRK</B></a>','1,692,579,200.00','1,983,924,800.00','1,831,315,200.00','1,704,000,000.00','1,803,400,000.00','1,498,100,000.00','1,803,400,000.00','1,884,450,000.00','2,542,160,000.00','2,180,050,000.00','2,069,200,000.00','1,682,600,000.00','1,619,950,000.00','1,852,650,000.00','2,040,600,000.00','2,315,700,000.00','2,641,200,000.00','2,938,800,000.00','3,599,100,000.00','4,101,900,000.00','5,220,600,000.00','5,808,200,000.00','4,689,500,000.00','5,375,000,000.00','3,787,500,000.00','4,150,000,000.00','3,662,500,000.00','3,712,500,000.00','4,375,000,000.00','3,587,500,000.00');
var thetable=new mytable();thetable.tableWidth=650;thetable.shownum=false;thetable.controlaccess=true;thetable.visCols=new Array(true,true,true,true,true);thetable.initsort=new Array(0,-1);thetable.inittable();thetable.refreshTable();</script>
"""
## Make a dummy list of scripts
scripts = [s for _ in range(10)]
According to the <script> provided in your question, you can do something like the code below to get, for each name (ANA, DEN, ...), the list of its values:
# aaa holds the <script> content as a string (e.g. str(x) from your code above)
for _ in range(1, len(aaa.split("<b>'")) - 1):
    s = aaa.split("<b>'")[_].split("'")
    print(_)
    lst = []
    for i in s:
        if "</B>" in i:
            name = i.split('>')[1].split("<")[0]
            print("{} = ".format(name), end="")
        if any(j.isdigit() for j in i) and ',' in i:
            lst.append(i)
    print(lst)
It's just example code, so it's not that beautiful :)
Hope this helps.

How to skip erroneous elements at io level in apache beam with Dataflow?

I am doing some analysis on TFRecords stored in GCP, but some of the records inside the files are corrupted, so when I run my pipeline and more than four errors occur, the pipeline breaks. I think this is a constraint of DataflowRunner and not of Beam itself.
Here is my processing script:
import argparse
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.metrics.metric import Metrics
from apache_beam.runners.direct import direct_runner
import tensorflow as tf
input_ = "path_to_bucket"
def _parse_example(serialized_example):
    """Return inputs and targets Tensors from a serialized tf.Example."""
    data_fields = {
        "inputs": tf.io.VarLenFeature(tf.int64),
        "targets": tf.io.VarLenFeature(tf.int64)
    }
    parsed = tf.io.parse_single_example(serialized_example, data_fields)
    inputs = tf.sparse.to_dense(parsed["inputs"])
    targets = tf.sparse.to_dense(parsed["targets"])
    return inputs, targets


class MyFnDo(beam.DoFn):

    def __init__(self):
        beam.DoFn.__init__(self)
        self.input_tokens = Metrics.distribution(self.__class__, 'input_tokens')
        self.output_tokens = Metrics.distribution(self.__class__, 'output_tokens')
        self.num_examples = Metrics.counter(self.__class__, 'num_examples')
        self.decode_errors = Metrics.counter(self.__class__, 'decode_errors')

    def process(self, element):
        # inputs = element.features.feature['inputs'].int64_list.value
        # outputs = element.features.feature['outputs'].int64_list.value
        try:
            inputs, outputs = _parse_example(element)
            self.input_tokens.update(len(inputs))
            self.output_tokens.update(len(outputs))
            self.num_examples.inc()
        except Exception:
            self.decode_errors.inc()


def main(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument('--input', dest='input', default=input_, help='input tfrecords')
    # parser.add_argument('--output', dest='output', default='gs://', help='output file')
    known_args, pipeline_args = parser.parse_known_args(argv)
    pipeline_options = PipelineOptions(pipeline_args)

    with beam.Pipeline(options=pipeline_options) as p:
        tfrecords = p | "Read TFRecords" >> beam.io.ReadFromTFRecord(
            known_args.input,
            coder=beam.coders.ProtoCoder(tf.train.Example))
        tfrecords | "count mean" >> beam.ParDo(MyFnDo())


if __name__ == '__main__':
    main(None)
So basically, how can I skip the corrupted TFRecords and log how many there were during my analysis?
There was a conceptual issue: beam.io.ReadFromTFRecord reads a single TFRecord dataset (which may be sharded across multiple files), whereas I was giving it a list of many individual TFRecord files, and that was causing the error. Switching from ReadFromTFRecord to ReadAllFromTFRecord resolved my issue.
from apache_beam.io.tfrecordio import ReadAllFromTFRecord

p = beam.Pipeline(runner=direct_runner.DirectRunner())
tfrecords = (p
             | beam.Create(tf.io.gfile.glob(input_))
             | ReadAllFromTFRecord(coder=beam.coders.ProtoCoder(tf.train.Example)))
tfrecords | beam.ParDo(MyFnDo())
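To actually log the number of corrupted records counted by decode_errors, one option (a sketch, assuming the pipeline object p from the snippet above is run outside a with-block so the result object is available) is to query the metric after the run:

from apache_beam.metrics.metric import MetricsFilter

result = p.run()
result.wait_until_finish()

# Query the 'decode_errors' counter declared in MyFnDo and print its committed value(s)
query = result.metrics().query(MetricsFilter().with_name('decode_errors'))
for counter in query['counters']:
    print('decode_errors:', counter.committed)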

How do you delete a virtual disk with pyvmomi

I am trying to write a Python program using the pyvmomi library to "erase" a virtual hard drive associated with a VM. The way this is done manually is to remove the virtual disk and create a new virtual disk with the same specs. I am expecting that I will need to do the same thing with pyvmomi so I have started down that path. My issue is that I can use ReconfigVM_Task to remove the virtual drive but that leaves the VMDK file itself.
I originally tried using DeleteVStorageObject_Task (since DeleteVirtualDisk_Task is deprecated) to remove the virtual disk file, but that requires the ID of the object (the VMDK file), which I am unable to find anywhere. Theoretically it's available from the VirtualDisk property vDiskId, but that is null; from further research it seems to only be populated for first-class disks.
So I am instead trying to delete the VMDK file directly using DeleteDatastoreFile_Task, but when I do that I end up with an XXXX-flat.vmdk file left behind in the datastore, so it seems to not actually delete the whole disk.
Any idea on where I'm going wrong here or how to better do this? The VMWare SDK documentation for pyvmomi is...lacking.
Thanks!
You'll have to perform a ReconfigVM_Task operation. The key point is that the file operation should be destroy. Here's the raw output from performing the operation in the UI:
spec = vim.vm.ConfigSpec()
spec_deviceChange_0 = vim.vm.device.VirtualDeviceSpec()
spec_deviceChange_0.fileOperation = 'destroy'
spec_deviceChange_0.device = vim.vm.device.VirtualDisk()
spec_deviceChange_0.device.shares = vim.SharesInfo()
spec_deviceChange_0.device.shares.shares = 1000
spec_deviceChange_0.device.shares.level = 'normal'
spec_deviceChange_0.device.capacityInBytes = 8589934592
spec_deviceChange_0.device.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo()
spec_deviceChange_0.device.storageIOAllocation.shares = vim.SharesInfo()
spec_deviceChange_0.device.storageIOAllocation.shares.shares = 1000
spec_deviceChange_0.device.storageIOAllocation.shares.level = 'normal'
spec_deviceChange_0.device.storageIOAllocation.limit = -1
spec_deviceChange_0.device.storageIOAllocation.reservation = 0
spec_deviceChange_0.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
spec_deviceChange_0.device.backing.backingObjectId = ''
spec_deviceChange_0.device.backing.fileName = '[kruddy_2TB_01] web01/web01_2.vmdk'
spec_deviceChange_0.device.backing.split = False
spec_deviceChange_0.device.backing.writeThrough = False
spec_deviceChange_0.device.backing.datastore = search_index.FindByUuid(None, "datastore-14", True, True)
spec_deviceChange_0.device.backing.eagerlyScrub = True
spec_deviceChange_0.device.backing.contentId = 'e26f44020e7897006bec81b1fffffffe'
spec_deviceChange_0.device.backing.thinProvisioned = False
spec_deviceChange_0.device.backing.diskMode = 'persistent'
spec_deviceChange_0.device.backing.digestEnabled = False
spec_deviceChange_0.device.backing.sharing = 'sharingNone'
spec_deviceChange_0.device.backing.uuid = '6000C292-7895-54ee-a55c-49d0036ef1bb'
spec_deviceChange_0.device.controllerKey = 200
spec_deviceChange_0.device.unitNumber = 0
spec_deviceChange_0.device.nativeUnmanagedLinkedClone = False
spec_deviceChange_0.device.capacityInKB = 8388608
spec_deviceChange_0.device.deviceInfo = vim.Description()
spec_deviceChange_0.device.deviceInfo.summary = '8,388,608 KB'
spec_deviceChange_0.device.deviceInfo.label = 'Hard disk 2'
spec_deviceChange_0.device.diskObjectId = '148-3000'
spec_deviceChange_0.device.key = 3000
spec_deviceChange_0.operation = 'remove'
spec.deviceChange = [spec_deviceChange_0]
spec.cpuFeatureMask = []
managedObject.ReconfigVM_Task(spec)
Kyle Ruddy got me pointed in the right direction. Here's a code snippet showing how I made it work, for future people searching for information on how to do this:
#Assuming dev is already set to the vim.vm.device.VirtualDisk you want to delete...
virtual_hdd_spec = vim.vm.device.VirtualDeviceSpec()
virtual_hdd_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.destroy
virtual_hdd_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.remove
virtual_hdd_spec.device = dev
spec = vim.vm.ConfigSpec()
spec.deviceChange = [virtual_hdd_spec]
WaitForTask(vm.ReconfigVM_Task(spec=spec))
The API documentation for this is at https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.device.VirtualDeviceSpec.html
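For completeness, a minimal sketch of one way to locate that dev object before building the spec (an assumption for illustration, matching on the 'Hard disk 2' label used in the raw example above):

# Find the VirtualDisk device on the VM by its label ('Hard disk 2' is just an example)
dev = None
for device in vm.config.hardware.device:
    if isinstance(device, vim.vm.device.VirtualDisk) and device.deviceInfo.label == 'Hard disk 2':
        dev = device
        break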
