Automate Your Life with Python: A Practical Guide to Reclaiming Hours Every Day

Introduction: Why Python is the Ultimate Automation Engine
Every day, millions of people perform the same digital tasks—renaming files, sorting through emails, downloading reports, entering data into spreadsheets, and backing up folders. These tasks are not complex. They require no creative thinking. Yet they consume hours of human attention that could be spent on meaningful work, learning, or rest.
Python is the antidote.
What separates Python from other automation tools is its unique combination of readability, ecosystem depth, and low friction. You don’t need to be a professional software engineer to write a script that saves you ten hours a month. A single afternoon of learning can yield a lifetime of time savings.
This guide will take you from zero to functional automation. We will explore file system management, email and messaging automation, web scraping, spreadsheet manipulation, PDF processing, and finally, scheduling—so your scripts run while you sleep.
By the end, you will have a toolkit of reusable patterns and the confidence to identify automation opportunities in your own daily workflow.
Part 1: The Automation Mindset
Before writing a single line of code, you must learn to see the world as an automation engineer. Not every task should be automated. Some tasks are too infrequent, too variable, or too cheap to justify the scripting effort. But many tasks fall into a sweet spot.
The 3-3-3 Rule of Automation
A useful heuristic for deciding whether to automate is the 3-3-3 rule:
- Does it take more than 3 minutes? If a recurring task takes under three minutes, automation might not yield significant returns.
- Does it happen more than 3 times? Frequency matters. A ten-minute task done weekly is 520 minutes annually. A ten-minute task done daily is 3,650 minutes annually.
- Can it be scripted in under 3 hours? If you can write and debug the automation in less time than the task would consume over six months, automate it.
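The arithmetic behind the rule is easy to encode. Here is a minimal sketch (the function name and the six-month break-even window are taken from the rule above):

```python
def should_automate(task_minutes, runs_per_year, build_hours):
    """3-3-3 break-even check: automate when six months of manual
    repetition would cost more than writing the script."""
    manual_cost = task_minutes * (runs_per_year / 2)  # minutes over six months
    build_cost = build_hours * 60                     # scripting time in minutes
    return manual_cost > build_cost

# A 10-minute daily task: 10 * 365 / 2 = 1,825 minutes vs. a 180-minute build
print(should_automate(10, 365, 3))  # True - automate it
```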
The Automation Pyramid
At the base of the pyramid are simple scripts—file renamers, text extractors, notification bots. These require minimal code and maximal immediate value.
In the middle are scheduled workflows—nightly backups, data exports, report generators. These run unattended and form the backbone of personal productivity.
At the top are event-driven automations—watch folders that trigger actions, email listeners that respond automatically, API integrations that sync across services. These are more complex but offer the highest leverage.
Start at the base.
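To make the tiers concrete, here is the top tier's "watch folder" idea in miniature: a standard-library polling sketch (the watchdog library installed later does the same job without polling; the function names here are illustrative):

```python
import time
from pathlib import Path

def snapshot(folder):
    """Record the current set of file names in a folder."""
    return {p.name for p in Path(folder).iterdir() if p.is_file()}

def new_entries(before, after):
    """Return names present in the new snapshot but not the old one."""
    return sorted(after - before)

def watch_folder(folder, on_new_file, poll_seconds=5):
    """Call on_new_file(path) for every file that appears in the folder."""
    seen = snapshot(folder)
    while True:
        time.sleep(poll_seconds)
        current = snapshot(folder)
        for name in new_entries(seen, current):
            on_new_file(Path(folder) / name)
        seen = current

# Example: announce each new download as it arrives (runs until interrupted)
# watch_folder(Path.home() / "Downloads", lambda p: print(f"New file: {p.name}"))
```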
Part 2: Setting Up Your Automation Environment
Python Installation
If you do not already have Python installed, the most reliable method on Windows is the official installer from python.org. On macOS, use Homebrew: brew install python. Most Linux distributions ship with Python pre-installed; in every case, confirm you have Python 3.8 or higher with python3 --version.
Essential Libraries for Automation
Create a dedicated automation environment:
# Create a project folder
mkdir automation_scripts
cd automation_scripts
# Create a virtual environment (keeps dependencies isolated)
python -m venv auto_env
# Activate it
# On Windows:
auto_env\Scripts\activate
# On macOS/Linux:
source auto_env/bin/activate
# Install the core automation toolkit
# (os, shutil, pathlib, smtplib, and imaplib ship with Python - no install needed)
pip install schedule watchdog python-dotenv requests beautifulsoup4 openpyxl pypdf pandas
The libraries you will use most frequently:
| Library | Purpose |
|---|---|
| os, shutil, pathlib (built-in) | File and folder operations |
| schedule | Time-based job scheduling |
| watchdog | File system event monitoring |
| smtplib (built-in) | Sending emails |
| imaplib (built-in) | Reading emails |
| requests | HTTP calls and API interactions |
| beautifulsoup4 | Web scraping |
| openpyxl | Excel file manipulation |
| pypdf (successor to PyPDF2) | PDF reading and merging |
| python-dotenv | Managing secrets (passwords, API keys) |
The Secret Management Rule
Never hard-code passwords or API keys into your scripts. Create a .env file in your project folder:
# .env file - keep this out of version control!
EMAIL_ADDRESS=your_email@gmail.com
EMAIL_PASSWORD=your_app_password
BACKUP_FOLDER=C:\Users\YourName\Documents
Then load it in your script:
from dotenv import load_dotenv
import os
load_dotenv()
email = os.getenv("EMAIL_ADDRESS")
password = os.getenv("EMAIL_PASSWORD")
Part 3: File System Automation
File management is the gateway to automation because it is universally painful and entirely predictable.
Scenario 1: Automatic Download Folder Organizer
Every browser download goes into your Downloads folder. Within days, it becomes a graveyard of PDFs, images, ZIP files, and random executables. This script watches your Downloads folder and sorts files by extension into subfolders.
import os
import shutil
import time
from pathlib import Path
# Configuration
DOWNLOADS_PATH = Path.home() / "Downloads"
ORGANIZED_FOLDER = DOWNLOADS_PATH / "Organized"
# Mapping of file extensions to folder names
FILE_CATEGORIES = {
"Images": [".jpg", ".jpeg", ".png", ".gif", ".bmp", ".svg"],
"Documents": [".pdf", ".docx", ".doc", ".txt", ".md", ".rtf"],
"Spreadsheets": [".xlsx", ".xls", ".csv"],
"Presentations": [".pptx", ".ppt"],
"Archives": [".zip", ".rar", ".7z", ".tar", ".gz"],
"Executables": [".exe", ".msi", ".sh", ".bat"],
"Code": [".py", ".js", ".html", ".css", ".json", ".xml"],
"Videos": [".mp4", ".mkv", ".mov", ".avi"],
"Audio": [".mp3", ".wav", ".flac", ".aac"],
}
def organize_downloads():
"""Sort all files in Downloads into category subfolders"""
# Create the main organized folder if it doesn't exist
ORGANIZED_FOLDER.mkdir(exist_ok=True)
# Create category subfolders
for category in FILE_CATEGORIES.keys():
(ORGANIZED_FOLDER / category).mkdir(exist_ok=True)
# Process each file in Downloads
for file_path in DOWNLOADS_PATH.iterdir():
if file_path.is_file() and file_path.parent != ORGANIZED_FOLDER:
file_extension = file_path.suffix.lower()
# Find which category this file belongs to
moved = False
for category, extensions in FILE_CATEGORIES.items():
if file_extension in extensions:
destination = ORGANIZED_FOLDER / category / file_path.name
shutil.move(str(file_path), str(destination))
print(f"Moved: {file_path.name} -> {category}")
moved = True
break
# If no category matched, put in "Other"
if not moved:
other_folder = ORGANIZED_FOLDER / "Other"
other_folder.mkdir(exist_ok=True)
shutil.move(str(file_path), str(other_folder / file_path.name))
print(f"Moved: {file_path.name} -> Other")
if __name__ == "__main__":
organize_downloads()
Scenario 2: Batch File Renamer
Renaming hundreds of photos from IMG_1234.jpg to Vacation_2025_001.jpg by hand is madness. This script handles sequential renaming with zero effort.
import os
from pathlib import Path
def batch_rename(folder_path, prefix, start_number=1, include_original_date=False):
"""
Rename all files in a folder with a sequential pattern.
Args:
folder_path: Path to the folder containing files
prefix: String prefix for new names (e.g., "Vacation_2025")
start_number: Starting number for sequence
include_original_date: If True, appends file modification date
"""
folder = Path(folder_path)
files = [f for f in folder.iterdir() if f.is_file()]
# Sort by modification date (optional, but often useful)
files.sort(key=lambda f: f.stat().st_mtime)
for idx, file_path in enumerate(files, start=start_number):
# Preserve the original file extension
extension = file_path.suffix
# Build the new filename
new_name = f"{prefix}_{idx:03d}"
if include_original_date:
# Get file modification date
mod_time = file_path.stat().st_mtime
from datetime import datetime
date_str = datetime.fromtimestamp(mod_time).strftime("%Y%m%d")
new_name += f"_{date_str}"
new_name += extension
new_path = folder / new_name
# Handle potential name collisions
counter = 1
while new_path.exists():
new_name = f"{prefix}_{idx:03d}_{counter}{extension}"
new_path = folder / new_name
counter += 1
file_path.rename(new_path)
print(f"Renamed: {file_path.name} -> {new_path.name}")
# Example usage
# batch_rename("/path/to/photos", "Hawaii_Trip", start_number=1)
Scenario 3: Duplicate File Finder
Over time, you accumulate duplicate files across folders. This script identifies duplicates by comparing file hashes, not just filenames.
import hashlib
import os
from pathlib import Path
from collections import defaultdict
def hash_file(file_path, chunk_size=8192):
"""Generate SHA-256 hash of a file"""
sha256 = hashlib.sha256()
with open(file_path, 'rb') as f:
while chunk := f.read(chunk_size):
sha256.update(chunk)
return sha256.hexdigest()
def find_duplicates(search_folder):
"""Find all duplicate files in a directory tree"""
folder = Path(search_folder)
hash_map = defaultdict(list)
# Walk through all files
for file_path in folder.rglob('*'):
if file_path.is_file():
try:
file_hash = hash_file(file_path)
hash_map[file_hash].append(file_path)
except (PermissionError, OSError):
print(f"Skipping {file_path} (access error)")
continue
# Return only hashes with more than one file
duplicates = {h: paths for h, paths in hash_map.items() if len(paths) > 1}
return duplicates
def report_duplicates(duplicates):
"""Print a human-readable duplicate report"""
total_duplicate_files = 0
total_wasted_bytes = 0
for hash_val, paths in duplicates.items():
print(f"\nDuplicate group (hash: {hash_val[:8]}...):")
total_duplicate_files += len(paths) - 1
file_size = paths[0].stat().st_size
total_wasted_bytes += file_size * (len(paths) - 1)
for p in paths:
print(f" - {p}")
print(f"\nSummary: {total_duplicate_files} duplicate files found")
print(f"Potential space savings: {total_wasted_bytes / (1024**2):.2f} MB")
# Usage
# duplicates = find_duplicates("/path/to/scan")
# report_duplicates(duplicates)
Part 4: Email Automation
Email is a notorious time sink. Automating email responses, filtering, and notifications can easily save an hour per day.
Scenario 4: Send Automated Email Reports
This script composes and sends an email from your Gmail account. Note that Gmail requires an “App Password” if you have 2FA enabled.
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from datetime import datetime
from dotenv import load_dotenv
import os
load_dotenv()
def send_report_email(report_content, recipient, subject=None):
"""
Send an email report. Perfect for daily status updates.
"""
smtp_server = "smtp.gmail.com"
smtp_port = 587
sender_email = os.getenv("EMAIL_ADDRESS")
sender_password = os.getenv("EMAIL_PASSWORD")
if subject is None:
subject = f"Daily Report - {datetime.now().strftime('%Y-%m-%d')}"
# Create message
message = MIMEMultipart()
message["From"] = sender_email
message["To"] = recipient
message["Subject"] = subject
# Attach the report as plain text
message.attach(MIMEText(report_content, "plain"))
# Send
try:
with smtplib.SMTP(smtp_server, smtp_port) as server:
server.starttls()
server.login(sender_email, sender_password)
server.send_message(message)
print(f"Email sent successfully to {recipient}")
return True
except Exception as e:
print(f"Failed to send email: {e}")
return False
def generate_daily_report():
"""Example: Generate a system report to email"""
import platform
computer_name = platform.node()  # os.name would only return 'posix' or 'nt'
current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
# Example report content - customize as needed
report = f"""
Daily Automation Report
Generated: {current_time}
Computer: {computer_name}
Tasks Completed Today:
- Downloaded folder organized
- Backup verified
- 3 duplicate files removed
System Status:
- Disk space: {get_disk_usage()}
- Memory: {get_memory_usage()}
Next Scheduled Tasks:
- Database backup at 2:00 AM
- Log rotation at 3:00 AM
"""
return report
def get_disk_usage():
"""Get disk usage for main drive (platform independent)"""
import shutil
total, used, free = shutil.disk_usage("/")
return f"{used // (2**30)} GB used / {free // (2**30)} GB free"
def get_memory_usage():
"""Get memory usage (requires psutil library)"""
try:
import psutil
memory = psutil.virtual_memory()
return f"{memory.percent}% used"
except ImportError:
return "psutil not installed"
# Send the report
if __name__ == "__main__":
report = generate_daily_report()
send_report_email(report, "your_email@gmail.com")
Scenario 5: Email Parser for Automated Ticketing
Monitor an inbox and automatically respond to certain message types. This is useful for customer service inboxes or personal filters.
import imaplib
import email
from email.header import decode_header
from dotenv import load_dotenv
import os
import re
load_dotenv()
def check_inbox_and_respond():
"""Connect to inbox, process unread emails, and auto-respond"""
imap_server = "imap.gmail.com"
email_address = os.getenv("EMAIL_ADDRESS")
email_password = os.getenv("EMAIL_PASSWORD")
# Connect to server
mail = imaplib.IMAP4_SSL(imap_server)
mail.login(email_address, email_password)
mail.select("INBOX")
# Search for unread emails
status, messages = mail.search(None, "UNSEEN")
if status != "OK":
print("No new messages")
return
email_ids = messages[0].split()
for email_id in email_ids:
status, msg_data = mail.fetch(email_id, "(RFC822)")
if status != "OK":
continue
msg = email.message_from_bytes(msg_data[0][1])
# Decode subject
subject, encoding = decode_header(msg["Subject"] or "")[0]
if isinstance(subject, bytes):
subject = subject.decode(encoding if encoding else "utf-8")
# Get sender
from_addr = msg.get("From")
# Get body
body = ""
if msg.is_multipart():
for part in msg.walk():
if part.get_content_type() == "text/plain":
body = part.get_payload(decode=True).decode()
break
else:
body = msg.get_payload(decode=True).decode()
print(f"Processing email from {from_addr}: {subject}")
# Auto-respond based on content
if "meeting" in subject.lower() or "meeting" in body.lower():
send_meeting_response(from_addr)
elif "password reset" in subject.lower():
send_password_reset_instructions(from_addr)
elif "unsubscribe" in body.lower():
process_unsubscribe(from_addr)
else:
send_auto_acknowledgment(from_addr)
# Mark as read (optional - remove if you want to keep as unread)
mail.store(email_id, "+FLAGS", "\\Seen")
mail.close()
mail.logout()
def send_auto_acknowledgment(recipient):
"""Send a simple acknowledgment email"""
send_report_email(
"Thank you for your email. I am currently away and will respond within 24 hours.",
recipient,
"Automatic Acknowledgment"
)
def send_meeting_response(recipient):
"""Send meeting availability"""
send_report_email(
"I received your meeting request. My available times are Mon/Wed 10 AM - 3 PM. Please propose a time.",
recipient,
"Meeting Request Received"
)
def send_password_reset_instructions(recipient):
"""Send password reset link"""
send_report_email(
"To reset your password, please visit: https://example.com/reset\n\nIf you did not request this, ignore this message.",
recipient,
"Password Reset Instructions"
)
def process_unsubscribe(recipient):
"""Log unsubscribe requests for manual review"""
import datetime
with open("unsubscribe_log.txt", "a") as f:
f.write(f"{datetime.datetime.now()}: {recipient}\n")
print(f"Logged unsubscribe from {recipient}")
# Run this every 15 minutes via a scheduler
Part 5: Web Automation and Scraping
The web is full of data that should be in your spreadsheets. Python can extract, monitor, and interact with websites automatically.
Scenario 6: Price Drop Monitor
Track the price of an Amazon product (or any e-commerce site) and get notified when it drops below a threshold.
import requests
from bs4 import BeautifulSoup
import time
import smtplib
from email.mime.text import MIMEText
from dotenv import load_dotenv
import os
load_dotenv()
def check_price(url, target_price, product_name):
"""
Check product price and send alert if below target.
Note: Many e-commerce sites block simple scraping.
You may need headers to mimic a browser.
"""
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
"Accept-Language": "en-US,en;q=0.9"
}
try:
response = requests.get(url, headers=headers, timeout=10)
soup = BeautifulSoup(response.content, "html.parser")
# Amazon price selectors (these change frequently)
price_element = soup.select_one(".a-price-whole")
if price_element:
price_text = price_element.get_text().strip()
# Remove commas and convert to float
current_price = float(price_text.replace(",", ""))
print(f"{product_name}: Current price = ${current_price}")
if current_price <= target_price:
send_price_alert(product_name, url, current_price, target_price)
return True
else:
print(f"Could not find price for {product_name}")
except Exception as e:
print(f"Error checking {product_name}: {e}")
return False
def send_price_alert(product_name, url, current_price, target_price):
"""Send email notification for price drop"""
subject = f"PRICE ALERT: {product_name} dropped to ${current_price}"
body = f"""
The price for {product_name} has dropped to ${current_price}
(your target was ${target_price}).
View product: {url}
Act now!
"""
send_report_email(body, os.getenv("EMAIL_ADDRESS"), subject)
# Monitor multiple products
products = [
{"url": "https://www.amazon.com/dp/B08N5WRWNW", "target": 49.99, "name": "Wireless Mouse"},
{"url": "https://www.amazon.com/dp/B07X6C9RMF", "target": 299.99, "name": "Monitor"},
]
for product in products:
check_price(product["url"], product["target"], product["name"])
time.sleep(30) # Be respectful to servers
Scenario 7: Website Change Detector
Monitor any webpage for changes and get alerted when content updates. Perfect for tracking price changes, job postings, or government announcements.
import hashlib
import os
import requests
import time
from datetime import datetime
def get_page_hash(url):
"""Get hash of page content to detect changes"""
try:
response = requests.get(url, timeout=10)
# Hash the first 10,000 characters to be efficient
content = response.text[:10000]
return hashlib.md5(content.encode()).hexdigest()
except Exception as e:
print(f"Error fetching {url}: {e}")
return None
def monitor_websites(websites, check_interval_minutes=60):
"""
Monitor a list of websites for changes.
websites: list of dicts with 'url' and 'name'
"""
# Store initial hashes
last_hashes = {}
for site in websites:
url = site["url"]
current_hash = get_page_hash(url)
if current_hash:
last_hashes[url] = current_hash
print(f"Initialized monitoring for {site['name']}")
while True:
for site in websites:
url = site["url"]
name = site["name"]
current_hash = get_page_hash(url)
if current_hash and current_hash != last_hashes.get(url):
# Change detected!
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
alert_message = f"[{timestamp}] CHANGE DETECTED on {name}\nURL: {url}"
print(alert_message)
# Save to log file
with open("change_log.txt", "a") as log:
log.write(f"{alert_message}\n")
# Send email alert
send_report_email(alert_message, os.getenv("EMAIL_ADDRESS"), f"Website Change: {name}")  # reuses send_report_email from Scenario 4
# Update hash
last_hashes[url] = current_hash
time.sleep(check_interval_minutes * 60)
# Example usage
websites_to_watch = [
{"url": "https://www.example.com/jobs", "name": "Job Postings"},
{"url": "https://www.nytimes.com", "name": "NY Times Front Page"},
]
# Uncomment to run (runs forever, use Ctrl+C to stop)
# monitor_websites(websites_to_watch, check_interval_minutes=30)
Part 6: Spreadsheet Automation
Excel is everywhere. Python can read, write, and manipulate spreadsheets faster than any human.
Scenario 8: CSV to Excel Converter with Formatting
Convert all CSV files in a folder to Excel format with automatic column width adjustment.
import pandas as pd
from pathlib import Path
import openpyxl
from openpyxl.utils import get_column_letter
def csvs_to_excel(folder_path, output_filename="combined_report.xlsx"):
"""Convert all CSV files in a folder to a single Excel workbook, one sheet per CSV"""
folder = Path(folder_path)
csv_files = list(folder.glob("*.csv"))
if not csv_files:
print("No CSV files found")
return
output_path = folder / output_filename
with pd.ExcelWriter(output_path, engine="openpyxl") as writer:
for csv_file in csv_files:
# Read CSV
df = pd.read_csv(csv_file)
# Use filename (without extension) as sheet name
sheet_name = csv_file.stem[:31] # Excel sheet names max 31 chars
df.to_excel(writer, sheet_name=sheet_name, index=False)
# Auto-adjust column widths
worksheet = writer.sheets[sheet_name]
for column in worksheet.columns:
max_length = 0
column_letter = get_column_letter(column[0].column)
for cell in column:
try:
if len(str(cell.value)) > max_length:
max_length = len(str(cell.value))
except Exception:
pass
adjusted_width = min(max_length + 2, 50)
worksheet.column_dimensions[column_letter].width = adjusted_width
print(f"Created Excel file: {output_path}")
def combine_excel_sheets(excel_file, output_file="combined.xlsx"):
"""Combine all sheets from one Excel file into a single sheet with source column"""
all_data = []
xl = pd.ExcelFile(excel_file)
for sheet_name in xl.sheet_names:
df = pd.read_excel(excel_file, sheet_name=sheet_name)
df["Source_Sheet"] = sheet_name
all_data.append(df)
combined = pd.concat(all_data, ignore_index=True)
combined.to_excel(output_file, index=False)
print(f"Combined {len(xl.sheet_names)} sheets into {output_file}")
# Usage
# csvs_to_excel("/path/to/csv/folder", "monthly_report.xlsx")
Scenario 9: Automated Data Cleaning Script
Clean messy Excel data—remove duplicates, fix dates, standardize text.
import pandas as pd
import re
def clean_excel_data(input_file, output_file, columns_to_clean=None):
"""
Automatically clean common data issues in Excel files.
Args:
input_file: Path to input Excel file
output_file: Path to save cleaned file
columns_to_clean: List of column names to apply cleaning to (None = all string columns)
"""
# Read the file
df = pd.read_excel(input_file)
print(f"Original shape: {df.shape}")
print(f"Original duplicates: {df.duplicated().sum()}")
# 1. Remove exact duplicate rows
df = df.drop_duplicates()
# 2. Standardize column names (lowercase, underscores instead of spaces)
df.columns = df.columns.str.lower().str.replace(' ', '_')
# 3. Clean string columns
string_columns = df.select_dtypes(include=['object']).columns
if columns_to_clean:
string_columns = [col for col in columns_to_clean if col in df.columns]
for col in string_columns:
# Trim leading/trailing spaces
df[col] = df[col].astype(str).str.strip()
# Replace multiple spaces with single space
df[col] = df[col].str.replace(r'\s+', ' ', regex=True)
# Convert empty strings and 'nan' to proper NA
df[col] = df[col].replace(['nan', 'None', '', 'NULL'], pd.NA)
# Standardize date-like columns (e.g., if column contains "date" in name)
if 'date' in col.lower():
df[col] = pd.to_datetime(df[col], errors='coerce')
# 4. Handle missing values
numeric_columns = df.select_dtypes(include=['number']).columns
for col in numeric_columns:
# Fill numeric NAs with median
df[col] = df[col].fillna(df[col].median())
# 5. Remove outliers (optional: values beyond 3 standard deviations)
for col in numeric_columns:
if df[col].std() > 0: # Avoid zero standard deviation
z_scores = (df[col] - df[col].mean()) / df[col].std()
df = df[abs(z_scores) < 3]
print(f"Cleaned shape: {df.shape}")
print(f"Remaining duplicates: {df.duplicated().sum()}")
# Save cleaned file
df.to_excel(output_file, index=False)
print(f"Saved cleaned file to {output_file}")
# Generate a cleaning report
generate_cleaning_report(input_file, output_file, df)
def generate_cleaning_report(original_file, cleaned_file, cleaned_df):
"""Generate a text report of cleaning operations"""
report = f"""
Data Cleaning Report
====================
Original file: {original_file}
Cleaned file: {cleaned_file}
Date: {pd.Timestamp.now()}
Rows in cleaned data: {len(cleaned_df)}
Columns: {list(cleaned_df.columns)}
Column data types:
{cleaned_df.dtypes.to_string()}
Missing values per column:
{cleaned_df.isnull().sum().to_string()}
Summary statistics for numeric columns:
{cleaned_df.describe().to_string()}
"""
report_file = cleaned_file.replace(".xlsx", "_report.txt")
with open(report_file, "w") as f:
f.write(report)
print(f"Report saved to {report_file}")
# Usage
# clean_excel_data("messy_data.xlsx", "clean_data.xlsx")
Part 7: Scheduling Your Automations
A script you run manually is not fully automated. True automation runs on a schedule.
Method 1: Python’s schedule Library
Run simple tasks within your Python script.
import schedule
import time
def job():
print("Running scheduled task...")
# Your automation function here
# Schedule a job every day at 9 AM
schedule.every().day.at("09:00").do(job)
# Every hour
schedule.every().hour.do(job)
# Every Monday at 8:30 AM
schedule.every().monday.at("08:30").do(job)
# Every 30 minutes
schedule.every(30).minutes.do(job)
while True:
schedule.run_pending()
time.sleep(60) # Check every minute
Method 2: Windows Task Scheduler
For production-ready scheduling, use the operating system’s native scheduler.
Creating a scheduled task via PowerShell (admin required):
# Create a task that runs your Python script daily at 2 AM
$Action = New-ScheduledTaskAction -Execute "python.exe" -Argument "C:\scripts\my_automation.py"
$Trigger = New-ScheduledTaskTrigger -Daily -At 2am
$Principal = New-ScheduledTaskPrincipal -UserId "SYSTEM" -LogonType ServiceAccount
$Settings = New-ScheduledTaskSettingsSet -AllowStartIfOnBatteries -DontStopIfGoingOnBatteries
Register-ScheduledTask -TaskName "PythonDailyAutomation" -Action $Action -Trigger $Trigger -Principal $Principal -Settings $Settings
Method 3: macOS/Linux Cron
On Unix-like systems, cron is the standard.
# Edit your crontab
crontab -e
# Add this line to run a Python script every day at 2 AM
0 2 * * * /usr/bin/python3 /home/user/scripts/automation.py
# Run every hour
0 * * * * /usr/bin/python3 /home/user/scripts/hourly_task.py
# Run every Monday at 9 AM
0 9 * * 1 /usr/bin/python3 /home/user/scripts/weekly_report.py
Part 8: A Complete Automation Example
Let’s tie everything together with a practical, end-to-end automation: The Daily Personal Assistant Bot
This bot runs every morning at 8 AM and performs multiple tasks:
- Checks your email for high-priority messages
- Monitors three websites for changes
- Organizes yesterday’s downloads
- Backs up important folders
- Sends you a morning summary report
# daily_assistant_bot.py
import os
import shutil
import datetime
from pathlib import Path
from dotenv import load_dotenv
load_dotenv()
def organize_downloads():
"""Move files from Downloads to appropriate folders"""
downloads = Path.home() / "Downloads"
docs_folder = Path.home() / "Documents" / "Auto_Sorted"
docs_folder.mkdir(exist_ok=True)
pdf_files = list(downloads.glob("*.pdf"))
image_files = list(downloads.glob("*.jpg")) + list(downloads.glob("*.png"))
# Move PDFs
for pdf in pdf_files:
destination = docs_folder / pdf.name
shutil.move(str(pdf), str(destination))
print(f"Moved PDF: {pdf.name}")
# Move images to Pictures
if image_files:
pictures = Path.home() / "Pictures" / "Auto_Imported"
pictures.mkdir(exist_ok=True)
for img in image_files:
shutil.move(str(img), str(pictures / img.name))
def backup_folder(source, destination):
"""Back up a folder to a timestamped copy"""
timestamp = datetime.datetime.now().strftime("%Y%m%d")
backup_name = f"{Path(source).name}_{timestamp}"
backup_path = Path(destination) / backup_name
shutil.copytree(source, backup_path, dirs_exist_ok=True)
print(f"Backed up {source} to {backup_path}")
return backup_path
def check_inbox_priority():
"""Connect to email and count unread priority emails"""
import imaplib
import email
mail = imaplib.IMAP4_SSL("imap.gmail.com")
mail.login(os.getenv("EMAIL_ADDRESS"), os.getenv("EMAIL_PASSWORD"))
mail.select("INBOX")
status, messages = mail.search(None, "UNSEEN")
email_ids = messages[0].split()
priority_keywords = ["urgent", "asap", "important", "deadline"]
priority_emails = []
for email_id in email_ids[-5:]: # Check last 5 unread
status, msg_data = mail.fetch(email_id, "(RFC822)")
msg = email.message_from_bytes(msg_data[0][1])
subject = msg["Subject"] or ""
if any(keyword in subject.lower() for keyword in priority_keywords):
priority_emails.append(subject)
mail.close()
mail.logout()
return priority_emails
def generate_morning_report():
"""Compile all information into a morning summary"""
now = datetime.datetime.now()
report = f"""
========== MORNING REPORT ==========
Date: {now.strftime('%Y-%m-%d')}
Time: {now.strftime('%H:%M')}
--- System Summary ---
Downloads organized: ✓
Backup completed: ✓
--- Priority Emails ---
"""
priority_emails = check_inbox_priority()
if priority_emails:
for email_subj in priority_emails:
report += f" - {email_subj}\n"
else:
report += " No priority emails\n"
report += """
--- Today's Schedule ---
1. Review priority emails
2. Check backup status
3. Update project tracker
Have a productive day!
"""
return report
def send_morning_report():
"""Email the morning report to yourself"""
report = generate_morning_report()
# Use the send_report_email function from earlier
send_report_email(report, os.getenv("EMAIL_ADDRESS"), "Morning Assistant Report")
def run_daily_assistant():
"""Main function that runs all automations"""
print(f"[{datetime.datetime.now()}] Starting Daily Assistant Bot...")
# Task 1: Organize downloads
organize_downloads()
# Task 2: Backup critical folders
backup_folder(
Path.home() / "Documents" / "Work",
Path.home() / "Backups"
)