diff --git a/.gitignore b/.gitignore
deleted file mode 100644
index a9325ad..0000000
--- a/.gitignore
+++ /dev/null
@@ -1,48 +0,0 @@
-# macOS system files
-.DS_Store
-.AppleDouble
-.LSOverride
-Icon
-._*
-
-# Python cache files
-__pycache__/
-*.py[cod]
-*$py.class
-
-# Python virtual environments
-venv/
-env/
-.env/
-.venv/
-
-# IDE specific files
-.idea/
-.vscode/
-*.swp
-*.swo
-
-# Logs and databases
-*.log
-*.sqlite
-*.db
-
-# Distribution / packaging
-dist/
-build/
-*.egg-info/
-
-# Temporary files
-log.txt
-*.tmp
-*.bak
-
-# Operating System temporary files
-*~
-.fuse_hidden*
-.Trash-*
-.nfs*
-
-mlx_models/
diff --git a/README.md b/README.md
deleted file mode 100644
index 36adfbe..0000000
--- a/README.md
+++ /dev/null
@@ -1,385 +0,0 @@
-# PATH-worthy Scripts πŸ› οΈ
-
-A collection of various scripts I use frequently enough to justify keeping them in my system PATH. 
-
-I haven't written documentation for all of these scripts. I might in time. Find documentation for some of the highlights below.
-
-## Installation
-
-1. Clone and enter repository:
-
-```bash
-git clone https://sij.ai/sij/pathScripts.git
-cd pathScripts
-```
-
-2. Add to your system PATH:
-
-macOS / ZSH:
-```bash
-echo "export PATH=\"\$PATH:$PWD\"" >> ~/.zshrc
-source ~/.zshrc
-```
-
-Linux / Bash:
-```bash
-echo "export PATH=\"\$PATH:$PWD\"" >> ~/.bashrc
-source ~/.bashrc
-```
-
-3. Make scripts executable:
-
-```bash
-chmod +x *
-```
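-
-To confirm the scripts are reachable, open a new shell and check one of them:
-
-```bash
-which bates   # should print a path inside the cloned repo
-```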
-
----
-
-## πŸ“„ `bates` - PDF Bates Number Tool
-
-Extracts Bates numbers from PDFs and renames the files accordingly.
-
-### Setup
-```bash
-pip3 install pdfplumber
-# For OCR support:
-pip3 install pytesseract pdf2image
-brew install tesseract poppler  # macOS
-# or
-sudo apt-get install tesseract-ocr poppler-utils  # Debian
-```
-
-### Usage
-```bash
-bates /path/to/folder --prefix "FWS-" --digits 6 --name-prefix "FWS "
-```
-
-### Key Features
-- Extracts Bates numbers from text/scanned PDFs
-- Renames files with number ranges
-- Prepares files for use with my [Bates Source Link](https://sij.ai/sij/DEVONthink/src/branch/main/Bates%20Source%20Link.scpt#) DEVONthink script
-- Preserves original names in Finder comments
-- OCR support for scanned documents
-- Dry-run mode with `--dry-run`
-
-### Options
-- `--prefix`: The Bates number prefix to search for (default: "FWS-")
-- `--digits`: Number of digits after the prefix (default: 6)
-- `--ocr`: Enable OCR for scanned documents
-- `--dry-run`: Test extraction without renaming files
-- `--name-prefix`: Prefix to use when renaming files
-- `--log`: Set logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
-
-### Examples
-```bash
-# Test without making changes
-bates /path/to/pdfs --prefix "FWS-" --digits 6 --dry-run
-
-# Rename files with OCR support
-bates /path/to/pdfs --prefix "FWS-" --digits 6 --name-prefix "FWS " --ocr
-```
-
-### Notes
-- Always test with `--dry-run` first
-- Original filenames are preserved in Finder comments (macOS only)
-- OCR is disabled by default to keep things fast
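-
-On macOS you can read the preserved original filename back from the Finder comment via Spotlight metadata (the filename below is illustrative):
-
-```bash
-mdls -name kMDItemFinderComment "FWS 000001–000137.pdf"
-```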
-
----
-
-## πŸͺ `camel` - File Renaming Utility
-
-Renames files in the current directory by splitting camelCase, PascalCase, and other compound words into readable, spaced formats.
-
-### Features
-
-- **Smart Splitting**:
-  - Handles camelCase, PascalCase, underscores (`_`), hyphens (`-`), and spaces.
-  - Preserves file extensions.
-  - Splits on capital letters and numbers intelligently.
-- **Word Detection**:
-  - Uses NLTK’s English word corpus and WordNet to identify valid words.
-  - Common words like "and", "the", "of" are always treated as valid.
-- **Automatic Renaming**:
-  - Processes all files in the current directory (ignores hidden files).
-  - Renames files in-place with clear logging.
-
-### Setup
-1. Install dependencies:
-   ```bash
-   pip3 install nltk
-   ```
-2. Download NLTK data:
-   ```bash
-   python3 -m nltk.downloader words wordnet
-   ```
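-
-To verify the corpora installed correctly, you can run an optional sanity check:
-
-```bash
-python3 -c "from nltk.corpus import words, wordnet; words.words(); wordnet.synsets('test'); print('NLTK data OK')"
-```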
-
-### Usage
-Run the script in the directory containing the files you want to rename:
-```bash
-camel
-```
-
-### Examples
-Before running the script:
-```plaintext
-Anti-OedipusCapitalismandSchizophrenia_ep7.aax
-TheDawnofEverythingANewHistoryofHumanity_ep7.aax
-TheWeirdandtheEerie_ep7.aax
-```
-
-After running the script:
-```plaintext
-Anti Oedipus Capitalism and Schizophrenia ep 7.aax
-The Dawn of Everything A New History of Humanity ep 7.aax
-The Weird and the Eerie ep 7.aax
-```
-
-### Notes
-- Hidden files (starting with `.`) are skipped.
-- If a word isn’t found in the dictionary, it’s left unchanged.
-- File extensions are preserved during renaming.
-
----
-
-## πŸ“¦ `deps` - Unified Python Dependency Manager
-
-A single script that analyzes `import` statements in .py files and installs dependencies using mamba/conda or pip.
-
-### Usage
-```bash
-deps <subcommand> ...
-```
-
-#### Subcommands
-
-1. **`ls`**  
-   Analyzes `.py` files for external imports:
-   - Writes PyPI-available packages to `requirements.txt`.
-   - Writes unavailable packages to `missing-packages.txt`.
-
-   **Examples**:
-   ```bash
-   deps ls            # Analyze current directory (no recursion)
-   deps ls -r         # Recursively analyze current directory
-   deps ls src        # Analyze a 'src' folder
-   deps ls -r src     # Recursively analyze 'src'
-   ```
-
-2. **`install`**  
-   Installs Python packages either by analyzing local imports or from explicit arguments.  
-   - **Conda Environment Detection**: If in a conda environment, tries `mamba` (if installed), else `conda`.  
-   - **Fallback** to `pip` if conda tool fails or is unavailable.  
-   - **`--no-conda`**: Skip conda/mamba entirely and go straight to pip.
-
-   **Examples**:
-   ```bash
-   deps install            # Analyze current folder, install discovered packages (no recursion)
-   deps install -r         # Same as above but recursive
-   deps install requests   # Directly install 'requests'
-   deps install script.py  # Analyze and install packages from 'script.py'
-   deps install -R requirements.txt  # Install from a requirements file
-   deps install requests --no-conda  # Skip conda/mamba, use pip only
-   ```
-
-### How It Works
-- **Scanning Imports**: Locates `import ...` and `from ... import ...` lines in `.py` files, skipping built-in modules.  
-- **PyPI Check**: Uses `urllib` to confirm package availability at `pypi.org`.  
-- **Requirements & Missing Packages**: If you run `deps ls`, discovered imports go into `requirements.txt` (available) or `missing-packages.txt` (unavailable).  
-- **Installation**: For `deps install`:
-  - If no extra arguments, it auto-discovers imports in the current directory (optionally with `-r`) and installs only PyPI-available ones.  
-  - If passed packages, `.py` files, or `-R <reqfile>`, it installs those specifically.  
-  - By default, tries conda environment tools first (mamba or conda) if in a conda environment, otherwise pip.  
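-
-You can reproduce the PyPI availability check by hand; the script uses `urllib` internally, but a quick `curl` gives the same signal:
-
-```bash
-# 200 means the package exists on PyPI; anything else is treated as unavailable
-curl -s -o /dev/null -w "%{http_code}\n" "https://pypi.org/pypi/requests/json"
-```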
-
-### Notes
-- If `mamba` or `conda` is available in your environment, `deps install` will prefer that. Otherwise, it uses pip.  
-- You can run `deps ls` repeatedly to keep updating `requirements.txt` and `missing-packages.txt`.
-
----
-
-## πŸ“ `linecount` - Line Counting Tool for Text Files
-
-Recursively counts the total lines in all text files within the current directory, with optional filtering by file extensions.
-
-### Usage
-```bash
-linecount [<extension1> <extension2> ...]
-```
-
-### Examples
-```bash
-linecount            # Count lines in all non-binary files
-linecount .py .sh    # Count lines only in .py and .sh files
-```
-
-### Key Features
-- **Recursive Search**: Processes files in the current directory and all subdirectories.
-- **Binary File Detection**: Automatically skips binary files.
-- **File Extension Filtering**: Optionally count lines in specific file types (case-insensitive).
-- **Quick Stats**: Displays the number of files scanned and total lines.
-
-### Notes
-- If no extensions are provided, all non-binary files are counted.
-- Use absolute or relative paths when running the script in custom environments.
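-
-The script's exact heuristic isn't documented here, but you can approximate how a file would likely be classified with `grep` (where `myfile` is a placeholder):
-
-```bash
-# grep -I treats files containing NUL bytes as binary and reports no matches
-grep -qI . myfile && echo "text" || echo "binary"
-```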
-
----
-
-## πŸ”ͺ `murder` - Force-Kill Processes by Name or Port
-
-A utility script to terminate processes by their name or by the port they are listening on:
-- If the argument is **numeric**, the script will terminate all processes listening on the specified port.
-- If the argument is **text**, the script will terminate all processes matching the given name.
-
-### Usage Examples
-```bash
-# Kill all processes listening on port 8080
-sudo murder 8080
-
-# Kill all processes with "node" in their name
-sudo murder node
-```
-
-### Features
-- Automatically detects whether the input is a **port** or a **process name**.
-- Uses `lsof` to find processes listening on a specified port.
-- Finds processes by name using `ps` and kills them using their process ID (PID).
-- Ignores the `grep` process itself when searching for process names. 
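-
-The underlying commands are roughly equivalent to the following sketch (not the script verbatim):
-
-```bash
-# By port: lsof -t prints only the PIDs listening on the given port
-sudo kill -9 $(lsof -t -i :8080)
-
-# By name: list matching PIDs via ps/grep (excluding the grep itself) and kill them
-ps aux | grep node | grep -v grep | awk '{print $2}' | xargs sudo kill -9
-```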
-
-### Notes
-- Requires `sudo` privileges.
-- Use with caution, as it forcefully terminates processes.
-
----
-
-## πŸ”„ `push` & `pull` - Bulk Git Repository Management
-
-Scripts to automate updates and management of multiple Git repositories.
-
-### Setup
-
-1. **Create a Repository List**  
-   Add repository paths to `~/.repos.txt`, one per line:
-   ```plaintext
-   ~/sijapi
-   ~/workshop/Nova/Themes/Neonva/neonva.novaextension
-   ~/scripts/pathScripts
-   ~/scripts/Swiftbar
-   ```
-
-   - Use `~` for home directory paths or replace it with absolute paths.
-   - Empty lines and lines starting with `#` are ignored.
-
-2. **Make Scripts Executable**  
-   ```bash
-   chmod +x push pull
-   ```
-
-3. **Run the Scripts**  
-   ```bash
-   pull    # Pulls the latest changes from all repositories
-   push    # Pulls, stages, commits, and pushes local changes
-   ```
-
-### Features
-
-#### `pull`
-- Recursively pulls the latest changes from all repositories listed in `~/.repos.txt`.
-- Automatically expands `~` to the home directory.
-- Skips directories that do not exist or are not Git repositories.
-- Uses `git pull --force` to ensure synchronization.
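-
-In essence, `pull` boils down to a loop like this (simplified sketch, not the script verbatim):
-
-```bash
-# Read ~/.repos.txt, skip blanks and comments, and pull each repo
-while IFS= read -r repo; do
-    case "$repo" in ''|'#'*) continue ;; esac
-    dir="${repo/#\~/$HOME}"          # expand a leading ~
-    [ -d "$dir/.git" ] || continue   # skip missing or non-Git directories
-    git -C "$dir" pull --force
-done < ~/.repos.txt
-```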
-
-#### `push`
-- Pulls the latest changes from the current branch.
-- Stages and commits all local changes with an auto-generated message: `Auto-update: <timestamp>`.
-- Pushes updates to the current branch.
-- Configures the `origin` remote automatically if missing, using a URL based on the directory name.
-
-### Notes
-- Both scripts assume `~/.repos.txt` is the repository list file. You can update the `REPOS_FILE` variable if needed.
-- Use absolute paths or ensure `~` is correctly expanded to avoid issues.
-- The scripts skip non-existent directories and invalid Git repositories.
-- `push` will attempt to set the `origin` remote automatically if it is missing.
-
----
-
-## 🌐 `vitals` - System and VPN Diagnostics
-
-The `vitals` script provides detailed system diagnostics, VPN status, DNS configuration, and uptime in JSON format. It integrates with tools like AdGuard Home, NextDNS, and Tailscale for network monitoring.
-
-### Usage
-1. **Set up a DNS rewrite rule in AdGuard Home**:
-   - Assign the domain `check.adguard.test` to your Tailscale IP or any custom domain.
-   - Update the `adguard_test_domain` variable in the script if using a different domain.
-
-2. **Run the script**:
-   ```bash
-   vitals
-   ```
-
-   Example output (JSON):
-   ```json
-   {
-       "local_ip": "192.168.1.2",
-       "wan_connected": true,
-       "wan_ip": "185.213.155.74",
-       "has_tailscale": true,
-       "tailscale_ip": "100.100.100.1",
-       "mullvad_exitnode": true,
-       "mullvad_hostname": "de-ber-wg-001.mullvad.ts.net",
-       "nextdns_connected": true,
-       "nextdns_protocol": "DoH",
-       "adguard_connected": true,
-       "uptime": "up 3 days, 2 hours, 15 minutes"
-   }
-   ```
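-
-   To confirm the DNS rewrite from step 1 is active, resolve the test domain from a machine that uses AdGuard Home for DNS (assuming the default domain):
-
-   ```bash
-   dig +short check.adguard.test   # should print your Tailscale IP
-   ```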
-
----
-
-## πŸ”’ `vpn` - Tailscale Exit Node Manager
-
-Privacy-focused Tailscale exit node management with automated logging.
-
-### Setup
-```bash
-pip3 install requests
-```
-
-### Usage
-```bash
-vpn <action> [<country>]  # Actions: start, stop, new, shh, to, status
-```
-
-### Actions
-- **`start`**: Connect to a suggested exit node if not already connected.
-- **`stop`**: Disconnect from the current exit node.
-- **`new`**: Switch to a new suggested exit node.
-- **`shh`**: Connect to a random exit node in a privacy-friendly country.
-- **`to <country>`**: Connect to a random exit node in a specific country.
-- **`status`**: Display the current exit node, external IP, and connection duration.
-
-### Features
-- **Privacy-Friendly Quick Selection**: Supports random exit nodes from:
-  `Finland`, `Germany`, `Iceland`, `Netherlands`, `Norway`, `Sweden`, `Switzerland`.
-- **Connection Verification**: Verifies the active exit node and external IP via the Mullvad API.
-- **Automated Logging**: Tracks all connections, disconnections, and IP changes in `/var/log/vpn_rotation.txt`.
-- **Default Tailscale arguments**:
-  - `--exit-node-allow-lan-access`
-  - `--accept-dns`
-  - `--accept-routes`
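-
-You can double-check the Mullvad verification yourself by querying the same service the repo's `ddns` script uses:
-
-```bash
-curl -s https://am.i.mullvad.net/ip   # prints the external IP Mullvad sees
-```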
-
-### Examples
-```bash
-vpn start         # Connect to a suggested node.
-vpn shh           # Connect to a random privacy-friendly node.
-vpn to Germany    # Connect to a random exit node in Germany.
-vpn status        # Show current connection details.
-vpn stop          # Disconnect from the exit node.
-```
-
-### Notes
-- Requires active Tailscale configuration and internet access.
-- Logging is handled automatically in `/var/log/vpn_rotation.txt`.
-- Use `sudo` for actions requiring elevated permissions (e.g., `crontab`).
-
----
-
-_More scripts will be documented as they're updated. Most scripts include `--help` for basic usage information._
diff --git a/aax2mp3 b/aax2mp3
deleted file mode 100755
index 1b2f97d..0000000
--- a/aax2mp3
+++ /dev/null
@@ -1,39 +0,0 @@
-#!/usr/bin/env python3
-
-import concurrent.futures
-import subprocess
-import glob
-import os
-import multiprocessing
-
-# Different ways to get the CPU count
-logical_cores = os.cpu_count()  # all logical cores, including hyperthreading
-physical_cores = multiprocessing.cpu_count()  # returns the same value as os.cpu_count()
-# os.sched_getaffinity is Linux-only; on macOS (including Apple Silicon) it
-# doesn't exist, so this falls through to the full core count.
-try:
-    p_cores = len([x for x in os.sched_getaffinity(0) if x < os.cpu_count()//2])
-except AttributeError:
-    p_cores = physical_cores
-
-print(f"System has {logical_cores} logical cores")
-max_workers = max(1, logical_cores - 2)  # Leave 2 cores free for system
-
-def convert_file(aax_file):
-    mp3_file = aax_file.replace('.aax', '.mp3')
-    print(f"Converting {aax_file} to {mp3_file}")
-    activation_bytes = os.getenv('AUDIBLE_ACTIVATION_BYTES')
-    if not activation_bytes:
-        # ffmpeg needs the activation bytes; passing None would crash subprocess
-        raise SystemExit("AUDIBLE_ACTIVATION_BYTES environment variable is not set")
-    subprocess.run(['ffmpeg', '-activation_bytes', activation_bytes,
-                   '-i', aax_file, mp3_file], check=True)
-
-aax_files = glob.glob('*.aax')
-if not aax_files:
-    print("No .aax files found in current directory")
-    exit(1)
-
-print(f"Found {len(aax_files)} files to convert")
-print(f"Will convert {max_workers} files simultaneously")
-
-with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
-    list(executor.map(convert_file, aax_files))
-
-
diff --git a/bates b/bates
deleted file mode 100755
index 0ade2f7..0000000
--- a/bates
+++ /dev/null
@@ -1,444 +0,0 @@
-#!/usr/bin/env python3
-
-"""
-Required packages:
-pip3 install pdfplumber pytesseract pdf2image  # pdf2image and pytesseract only needed if using --ocr
-
-System dependencies (only if using --ocr):
-brew install tesseract poppler  # on macOS
-# or
-sudo apt-get install tesseract-ocr poppler-utils  # on Ubuntu/Debian
-"""
-
-import os
-import sys
-import re
-import argparse
-import logging
-from pathlib import Path
-import tempfile
-import subprocess
-import pdfplumber
-
-def check_dependencies(ocr_enabled):
-    try:
-        if ocr_enabled:
-            import pytesseract
-            from pdf2image import convert_from_path
-    except ImportError as e:
-        print(f"Missing dependency: {e}")
-        print("Please install required packages:")
-        if ocr_enabled:
-            print("pip3 install pytesseract pdf2image")
-        sys.exit(1)
-
-
-def setup_logging(log_level):
-    """Configure logging with the specified level."""
-    numeric_level = getattr(logging, log_level.upper(), None)
-    if not isinstance(numeric_level, int):
-        raise ValueError(f'Invalid log level: {log_level}')
-    
-    logging.basicConfig(
-        level=numeric_level,
-        format='%(asctime)s - %(levelname)s - %(message)s',
-        datefmt='%Y-%m-%d %H:%M:%S'
-    )
-
-def build_regex_pattern(prefix, num_digits):
-    """Build regex pattern based on prefix and number of digits."""
-    # Escape any special regex characters in the prefix
-    escaped_prefix = re.escape(prefix)
-    # Pattern matches the prefix followed by exactly num_digits digits
-    # and ensures no digits or letters follow
-    pattern = f"{escaped_prefix}\\d{{{num_digits}}}(?![\\d\\w])"
-    logging.debug(f"Generated regex pattern: {pattern}")
-    return pattern
-
-def set_finder_comment(file_path, comment):
-    """Set the Finder comment for a file using osascript."""
-    try:
-        # Escape special characters in both the file path and comment
-        escaped_path = str(file_path).replace('"', '\\"').replace("'", "'\\''")
-        escaped_comment = comment.replace('"', '\\"').replace("'", "'\\''")
-        
-        script = f'''
-        osascript -e 'tell application "Finder"
-            set commentPath to POSIX file "{escaped_path}" as alias
-            set comment of commentPath to "{escaped_comment}"
-        end tell'
-        '''
-        subprocess.run(script, shell=True, check=True, stderr=subprocess.PIPE)
-        logging.debug(f"Set Finder comment for {file_path} to: {comment}")
-        return True
-    except subprocess.CalledProcessError as e:
-        logging.error(f"Failed to set Finder comment for {file_path}: {e.stderr.decode()}")
-        return False
-    except Exception as e:
-        logging.error(f"Failed to set Finder comment for {file_path}: {e}")
-        return False
-
-def rename_with_bates(file_path, name_prefix, first_num, last_num):
-    """Rename file using Bates numbers and preserve original name in metadata."""
-    try:
-        path = Path(file_path)
-        original_name = path.name
-        new_name = f"{name_prefix}{first_num}–{last_num}{path.suffix}"
-        new_path = path.parent / new_name
-        
-        # First try to set the metadata
-        if not set_finder_comment(file_path, original_name):
-            logging.error(f"Skipping rename of {file_path} due to metadata failure")
-            return False
-            
-        # Then rename the file
-        path.rename(new_path)
-        logging.info(f"Renamed {original_name} to {new_name}")
-        return True
-    except Exception as e:
-        logging.error(f"Failed to rename {file_path}: {e}")
-        return False
-
-def ocr_page(pdf_path, page_num):
-    """OCR a specific page of a PDF."""
-    filename = Path(pdf_path).name
-    logging.debug(f"[{filename}] Running OCR on page {page_num}")
-    try:
-        # Import OCR-related modules only when needed
-        import pytesseract
-        from pdf2image import convert_from_path
-        
-        # Convert specific page to image
-        images = convert_from_path(pdf_path, first_page=page_num+1, last_page=page_num+1)
-        if not images:
-            logging.error(f"[{filename}] Failed to convert page {page_num} to image")
-            return ""
-        
-        # OCR the image
-        with tempfile.NamedTemporaryFile(suffix='.png') as tmp:
-            images[0].save(tmp.name, 'PNG')
-            text = pytesseract.image_to_string(tmp.name)
-            logging.debug(f"[{filename}] Page {page_num} OCR result: '{text}'")
-            return text
-    except Exception as e:
-        logging.error(f"[{filename}] OCR failed for page {page_num}: {str(e)}")
-        return ""
-
-def extract_text_from_page_multilayer(page, pdf_path, page_num):
-    """Extract text from different PDF layers."""
-    filename = Path(pdf_path).name
-    # Get page dimensions
-    width = page.width
-    height = page.height
-
-    # Calculate crop box for bottom fifth of page
-    padding = 2
-    y0 = max(0, min(height * 0.8, height - padding))
-    y1 = max(y0 + padding, min(height, height))
-    x0 = padding
-    x1 = max(x0 + padding, min(width - padding, width))
-
-    crop_box = (x0, y0, x1, y1)
-
-    logging.info(f"[{filename}] Page {page_num}: Dimensions {width}x{height}, crop box: ({x0:.2f}, {y0:.2f}, {x1:.2f}, {y1:.2f})")
-
-    texts = []
-
-    # Method 1: Try regular text extraction
-    try:
-        text = page.crop(crop_box).extract_text()
-        if text:
-            logging.info(f"[{filename}] Page {page_num}: Regular extraction found: '{text}'")
-            texts.append(text)
-    except Exception as e:
-        logging.debug(f"[{filename}] Page {page_num}: Regular text extraction failed: {e}")
-
-    # Method 2: Try extracting words individually
-    try:
-        words = page.crop(crop_box).extract_words()
-        if words:
-            text = ' '.join(word['text'] for word in words)
-            logging.info(f"[{filename}] Page {page_num}: Word extraction found: '{text}'")
-            texts.append(text)
-    except Exception as e:
-        logging.debug(f"[{filename}] Page {page_num}: Word extraction failed: {e}")
-
-    # Method 3: Try extracting characters individually
-    try:
-        chars = page.crop(crop_box).chars
-        if chars:
-            text = ''.join(char['text'] for char in chars)
-            logging.info(f"[{filename}] Page {page_num}: Character extraction found: '{text}'")
-            texts.append(text)
-    except Exception as e:
-        logging.debug(f"[{filename}] Page {page_num}: Character extraction failed: {e}")
-
-    # Method 4: Try extracting annotations
-    try:
-        annots = page.annots
-        if annots and isinstance(annots, list):  # page.annots can be None or non-list, so guard before iterating
-            for annot in annots:
-                if isinstance(annot, dict) and 'contents' in annot:
-                    text = annot['contents']
-                    if text and not isinstance(text, str):
-                        text = str(text)
-                    if text and text.lower() != 'none':
-                        logging.info(f"[{filename}] Page {page_num}: Annotation found: '{text}'")
-                        texts.append(text)
-    except Exception as e:
-        logging.debug(f"[{filename}] Page {page_num}: Annotation extraction failed: {e}")
-
-    # Method 5: Try extracting text in reverse order
-    try:
-        chars = sorted(page.crop(crop_box).chars, key=lambda x: (-x['top'], x['x0']))
-        if chars:
-            text = ''.join(char['text'] for char in chars)
-            logging.info(f"[{filename}] Page {page_num}: Reverse order extraction found: '{text}'")
-            texts.append(text)
-    except Exception as e:
-        logging.debug(f"[{filename}] Page {page_num}: Reverse order extraction failed: {e}")
-
-    # Method 6: Last resort - flatten and OCR the crop box
-    if not texts:
-        try:
-            logging.info(f"[{filename}] Page {page_num}: Attempting flatten and OCR")
-            # Import needed only if we get this far
-            from pdf2image import convert_from_path
-            import pytesseract
-            import PyPDF2
-
-            # pdfplumber can't write PDFs, so save just this page to a
-            # temporary single-page PDF with PyPDF2 (as flatten_and_ocr_page does)
-            with tempfile.NamedTemporaryFile(suffix='.pdf') as tmp_pdf:
-                pdf_writer = PyPDF2.PdfWriter()
-                with open(pdf_path, 'rb') as pdf_file:
-                    pdf_reader = PyPDF2.PdfReader(pdf_file)
-                    pdf_writer.add_page(pdf_reader.pages[page_num])
-                    pdf_writer.write(tmp_pdf)
-                    tmp_pdf.flush()
-
-                # Convert to image
-                images = convert_from_path(tmp_pdf.name)
-                if images:
-                    # Crop the image to our area of interest
-                    img = images[0]
-                    img_width, img_height = img.size
-                    crop_box_pixels = (
-                        int(x0 * img_width / width),
-                        int(y0 * img_height / height),
-                        int(x1 * img_width / width),
-                        int(y1 * img_height / height)
-                    )
-                    cropped = img.crop(crop_box_pixels)
-                    
-                    # OCR the cropped area
-                    text = pytesseract.image_to_string(cropped)
-                    if text:
-                        logging.info(f"[{filename}] Page {page_num}: Flatten/OCR found: '{text}'")
-                        texts.append(text)
-        except Exception as e:
-            logging.debug(f"[{filename}] Page {page_num}: Flatten/OCR failed: {e}")
-
-    return texts
-
-
-def find_bates_number(texts, pattern):
-    """Try to find Bates number in multiple text layers."""
-    for text in texts:
-        matches = list(re.finditer(pattern, text))
-        if matches:
-            return matches[-1]  # Return last match if found
-    return None
-
-def extract_bates_numbers(pdf_path, pattern, use_ocr):
-    """Extract Bates numbers from first and last page of PDF using provided pattern."""
-    filename = Path(pdf_path).name
-    logging.info(f"[{filename}] Processing PDF")
-    try:
-        with pdfplumber.open(pdf_path) as pdf:
-            first_page = pdf.pages[0]
-            last_page = pdf.pages[-1]
-
-            # Try all PDF layers first
-            first_texts = extract_text_from_page_multilayer(first_page, pdf_path, 0)
-            last_texts = extract_text_from_page_multilayer(last_page, pdf_path, len(pdf.pages)-1)
-
-            first_match = find_bates_number(first_texts, pattern)
-            last_match = find_bates_number(last_texts, pattern)
-
-            # If no matches found, try flatten and OCR
-            if not first_match or not last_match:
-                logging.info(f"[{filename}] No matches in text layers, attempting flatten/OCR")
-                
-                # For first page
-                if not first_match:
-                    try:
-                        flattened_text = flatten_and_ocr_page(first_page, pdf_path, 0)
-                        if flattened_text:
-                            first_texts.append(flattened_text)
-                            matches = list(re.finditer(pattern, flattened_text))
-                            if matches:
-                                first_match = matches[-1]
-                    except Exception as e:
-                        logging.error(f"[{filename}] Flatten/OCR failed for first page: {e}")
-
-                # For last page
-                if not last_match:
-                    try:
-                        flattened_text = flatten_and_ocr_page(last_page, pdf_path, len(pdf.pages)-1)
-                        if flattened_text:
-                            last_texts.append(flattened_text)
-                            matches = list(re.finditer(pattern, flattened_text))
-                            if matches:
-                                last_match = matches[-1]
-                    except Exception as e:
-                        logging.error(f"[{filename}] Flatten/OCR failed for last page: {e}")
-
-            if first_match and last_match:
-                first_num = ''.join(filter(str.isdigit, first_match.group(0)))
-                last_num = ''.join(filter(str.isdigit, last_match.group(0)))
-
-                logging.info(f"[{filename}] Found numbers: {first_num}–{last_num}")
-                return (first_num, last_num)
-            else:
-                logging.warning(f"[{filename}] No matching numbers found")
-                return None
-    except Exception as e:
-        logging.error(f"[{filename}] Error processing PDF: {str(e)}")
-        return None
-
-def flatten_and_ocr_page(page, pdf_path, page_num):
-    """Flatten page and OCR the crop box area."""
-    filename = Path(pdf_path).name
-    logging.info(f"[{filename}] Page {page_num}: Attempting flatten and OCR")
-    
-    try:
-        # Import needed only if we get this far
-        from pdf2image import convert_from_path
-        import pytesseract
-        import PyPDF2
-        
-        # Get page dimensions
-        width = page.width
-        height = page.height
-        
-        # Calculate crop box for bottom fifth
-        padding = 2
-        y0 = max(0, min(height * 0.8, height - padding))
-        y1 = max(y0 + padding, min(height, height))
-        x0 = padding
-        x1 = max(x0 + padding, min(width - padding, width))
-        
-        # Create a single-page PDF with just this page
-        with tempfile.NamedTemporaryFile(suffix='.pdf', delete=False) as tmp_pdf:
-            pdf_writer = PyPDF2.PdfWriter()
-            with open(pdf_path, 'rb') as pdf_file:
-                pdf_reader = PyPDF2.PdfReader(pdf_file)
-                pdf_writer.add_page(pdf_reader.pages[page_num])
-                pdf_writer.write(tmp_pdf)
-                tmp_pdf.flush()
-            
-            # Convert to image
-            images = convert_from_path(tmp_pdf.name)
-            if images:
-                # Crop the image to our area of interest
-                img = images[0]
-                img_width, img_height = img.size
-                crop_box_pixels = (
-                    int(x0 * img_width / width),
-                    int(y0 * img_height / height),
-                    int(x1 * img_width / width),
-                    int(y1 * img_height / height)
-                )
-                cropped = img.crop(crop_box_pixels)
-                
-                # OCR the cropped area
-                text = pytesseract.image_to_string(cropped)
-                if text:
-                    logging.info(f"[{filename}] Page {page_num}: Flatten/OCR found: '{text}'")
-                    os.unlink(tmp_pdf.name)  # the file was created with delete=False
-                    return text
-
-        # Clean up the temporary file (no text found or conversion failed)
-        os.unlink(tmp_pdf.name)
-        
-    except Exception as e:
-        logging.error(f"[{filename}] Page {page_num}: Flatten/OCR failed: {e}")
-        return None
-
-def process_folder(folder_path, pattern, use_ocr, dry_run=False, name_prefix=None):
-    """Process all PDFs in the specified folder."""
-    folder = Path(folder_path)
-    if not folder.exists():
-        logging.error(f"Folder does not exist: {folder_path}")
-        return
-    
-    logging.info(f"Processing folder: {folder_path}")
-    
-    pdf_count = 0
-    success_count = 0
-    rename_count = 0
-    
-    # Use simple case-insensitive matching
-    pdf_files = [f for f in folder.iterdir() if f.is_file() and f.suffix.lower() == '.pdf']
-    
-    for pdf_file in pdf_files:
-        pdf_count += 1
-        numbers = extract_bates_numbers(pdf_file, pattern, use_ocr)
-        if numbers:
-            success_count += 1
-            if dry_run:
-                print(f"{pdf_file.name}: {numbers[0]}–{numbers[1]}")
-            elif name_prefix is not None:
-                if rename_with_bates(pdf_file, name_prefix, numbers[0], numbers[1]):
-                    rename_count += 1
-    
-    logging.info(f"Processed {pdf_count} PDFs, successfully extracted {success_count} number pairs")
-    if not dry_run and name_prefix is not None:
-        logging.info(f"Renamed {rename_count} files")
-
-def main():
-    parser = argparse.ArgumentParser(description='Extract Bates numbers from PDFs')
-    parser.add_argument('folder', help='Path to folder containing PDFs')
-    parser.add_argument('--log', default='INFO',
-                        choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'],
-                        help='Set the logging level')
-    parser.add_argument('--width-start', type=float, default=0.67,
-                        help='Relative x-coordinate to start crop (0-1)')
-    parser.add_argument('--height-start', type=float, default=0.83,
-                        help='Relative y-coordinate to start crop (0-1)')
-    parser.add_argument('--prefix', type=str, default='FWS-',
-                        help='Prefix pattern to search for (default: "FWS-")')
-    parser.add_argument('--digits', type=int, default=6,
-                        help='Number of digits to match after prefix (default: 6)')
-    parser.add_argument('--ocr', action='store_true',
-                        help='Enable OCR for pages with little or no text (disabled by default)')
-    parser.add_argument('--dry-run', action='store_true',
-                        help='Only print matches without renaming files')
-    parser.add_argument('--name-prefix', type=str,
-                        help='Prefix to use when renaming files (e.g., "FWS ")')
-    
-    args = parser.parse_args()
-    
-    setup_logging(args.log)
-    
-    # Check dependencies based on whether OCR is enabled
-    check_dependencies(args.ocr)
-    
-    # Display the pattern we're looking for
-    display_pattern = f"{args.prefix}{'#' * args.digits}"
-    print(f"Looking for pattern: {display_pattern}")
-    
-    if not args.dry_run and args.name_prefix is None:
-        logging.error("Must specify --name-prefix when not in dry-run mode")
-        sys.exit(1)
-    
-    pattern = build_regex_pattern(args.prefix, args.digits)
-    process_folder(args.folder, pattern, args.ocr, args.dry_run, args.name_prefix)
-
-if __name__ == '__main__':
-    main()
-
diff --git a/camel b/camel
deleted file mode 100755
index d14d3db..0000000
--- a/camel
+++ /dev/null
@@ -1,103 +0,0 @@
-#!/usr/bin/env python3
-import re
-import os
-import nltk
-from nltk.corpus import words
-from nltk.corpus import wordnet
-
-try:
-    word_list = words.words()
-    wordnet.synsets('test')  # force the lazy WordNet corpus to load
-except LookupError:
-    nltk.download('words')
-    nltk.download('wordnet')
-    word_list = words.words()
-
-word_set = set(word.lower() for word in word_list)
-common_words = ['and', 'in', 'the', 'of', 'to', 'at', 'by', 'for', 'with', 'from', 'on']
-always_valid = {'the', 'a', 'an', 'and', 'or', 'but', 'nor', 'for', 'yet', 'so', 'on'}
-
-def is_word(word):
-    if word.lower() in always_valid:
-        print(f"  Checking if '{word}' is in dictionary: True (common word)")
-        return True
-    
-    in_words = word.lower() in word_set
-    in_wordnet = bool(wordnet.synsets(word))
-    result = in_words and in_wordnet
-    print(f"  Checking if '{word}' is in dictionary: {result} (words:{in_words}, wordnet:{in_wordnet})")
-    return result
-
-def process_word(word):
-    print(f"\nProcessing word: '{word}'")
-    
-    if is_word(word):
-        print(f"  '{word}' is in dictionary, returning as-is")
-        return word
-        
-    print(f"  '{word}' not in dictionary, checking for common words at end...")
-    for common in common_words:
-        if word.lower().endswith(common):
-            print(f"  Found '{common}' at end of '{word}'")
-            remainder = word[:-len(common)]
-            common_case = word[-len(common):]
-            print(f"  Recursively processing remainder: '{remainder}'")
-            return f"{process_word(remainder)} {common_case}"
-    
-    print(f"  No common words found at end of '{word}'")
-    
-    match = re.search(r'([a-zA-Z]+)(\d+)$', word)
-    if match:
-        text, num = match.groups()
-        print(f"  Found number at end: '{text}' + '{num}'")
-        if is_word(text):
-            return f"{text} {num}"
-            
-    print(f"  Returning '{word}' unchanged")
-    return word
-
-def split_filename(filename):
-    print(f"\nProcessing filename: {filename}")
-    base = os.path.splitext(filename)[0]
-    ext = os.path.splitext(filename)[1]
-    
-    print(f"Splitting on delimiters...")
-    parts = re.split(r'([_\-\s])', base)
-    
-    result = []
-    for part in parts:
-        if part in '_-':
-            result.append(' ')
-        else:
-            print(f"\nSplitting on capitals: {part}")
-            words = re.split('(?<!^)(?=[A-Z])', part)
-            print(f"Got words: {words}")
-            processed = [process_word(word) for word in words]
-            result.append(' '.join(processed))
-    
-    final = ' '.join(''.join(result).split())
-    return final + ext
-
-def main():
-    # Get all files in current directory
-    files = [f for f in os.listdir('.') if os.path.isfile(f)]
-    
-    for filename in files:
-        if filename.startswith('.'):  # Skip hidden files
-            continue
-            
-        print(f"\n{'='*50}")
-        print(f"Original: {filename}")
-        new_name = split_filename(filename)
-        print(f"New name: {new_name}")
-        
-        if new_name != filename:
-            try:
-                os.rename(filename, new_name)
-                print(f"Renamed: {filename} -> {new_name}")
-            except OSError as e:
-                print(f"Error renaming {filename}: {e}")
-
-if __name__ == "__main__":
-    main()
-
diff --git a/cf b/cf
deleted file mode 100755
index e47b26c..0000000
--- a/cf
+++ /dev/null
@@ -1,113 +0,0 @@
-#!/bin/bash
-if [ "$EUID" -ne 0 ]; then
-  echo "This script must be run as root. Try using 'sudo'."
-  exit 1
-fi
-source /home/sij/.zshrc
-source /home/sij/.GLOBAL_VARS
-ddns
-
-# Initialize variables
-full_domain=$1
-shift # Shift the arguments to left so we can get remaining arguments as before
-caddyIP="" # Optional IP for Caddyfile
-port=""
-
-# Fixed IP for Cloudflare from ip.txt
-cloudflareIP=$(cat /home/sij/.services/ip.txt)
-api_key=$CF_API_KEY
-cf_domains_file=/home/sij/.services/cf_domains.json
-
-# Usage message
-usage() {
-    echo "Usage: $0 <full-domain> [--ip <ip address>] --port <port>"
-    echo "Note: <full-domain> is required and can be a subdomain or a full domain."
-    exit 1
-}
-
-# Parse command-line arguments
-while [[ $# -gt 0 ]]; do
-    case $1 in
-        --ip|-i)
-            caddyIP="$2"
-            shift 2
-            ;;
-        --port|-p)
-            port="$2"
-            shift 2
-            ;;
-        *)
-            usage
-            ;;
-    esac
-done
-
-# Check required parameter
-if [[ -z "$full_domain" ]] || [[ -z "$port" ]]; then
-    usage
-fi
-
-# Extract subdomain and domain
-subdomain=$(echo "$full_domain" | awk -F"." '{print $1}')
-remaining_parts=$(echo "$full_domain" | awk -F"." '{print NF}')
-if [ "$remaining_parts" -eq 2 ]; then
-  # Handle root domain (e.g., env.esq)
-  domain=$full_domain
-  subdomain="@"  # Use "@" for root domain
-else
-  # Handle subdomain (e.g., sub.env.esq)
-  domain=$(echo "$full_domain" | sed "s/^$subdomain\.//")
-fi
-
-# Default to localhost for Caddyfile if IP is not provided via --ip
-if [[ -z "$caddyIP" ]]; then
-    caddyIP="localhost"
-fi
-
-# Extract zone_id from JSON file
-zone_id=$(jq -r ".\"$domain\".zone_id" "$cf_domains_file")
-
-# Check if zone_id was successfully retrieved
-if [ "$zone_id" == "null" ] || [ -z "$zone_id" ]; then
-    echo "Error: Zone ID for $domain could not be found."
-    exit 1
-fi
-
-# API call setup for Cloudflare A record using the fixed IP from ip.txt
-endpoint="https://api.cloudflare.com/client/v4/zones/$zone_id/dns_records"
-data="{\"type\":\"A\",\"name\":\"$subdomain\",\"content\":\"$cloudflareIP\",\"ttl\":120,\"proxied\":true}"
-
-# Make API call
-response=$(curl -s -X POST "$endpoint" -H "Authorization: Bearer $api_key" -H "Content-Type: application/json" --data "$data")
-
-# Parse response
-record_id=$(echo "$response" | jq -r '.result.id')
-success=$(echo "$response" | jq -r '.success')
-error_message=$(echo "$response" | jq -r '.errors[0].message')
-error_code=$(echo "$response" | jq -r '.errors[0].code')
-
-# Function to update Caddyfile with correct indentation
-update_caddyfile() {
-    echo "$full_domain {
-    reverse_proxy $caddyIP:$port
-    tls {
-        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
-    }
-}" >> /etc/caddy/Caddyfile
-    echo "Configuration appended to /etc/caddy/Caddyfile with correct formatting."
-}
-
-# Check for success or specific error to update Caddyfile
-if [ "$success" == "true" ]; then
-    jq ".\"$domain\".subdomains[\"$full_domain\"] = \"$record_id\"" "$cf_domains_file" > temp.json && mv temp.json "$cf_domains_file"
-    echo "A record created and cf_domains.json updated successfully."
-    update_caddyfile
-elif [ "$error_message" == "Record already exists." ]; then
-    echo "Record already exists. Updating Caddyfile anyway."
-    update_caddyfile
-else
-    echo "Failed to create A record. Error: $error_message (Code: $error_code)"
-fi
-
-echo "Restarting caddy!"
-sudo systemctl restart caddy
diff --git a/checknode b/checknode
deleted file mode 100755
index 327a6e6..0000000
--- a/checknode
+++ /dev/null
@@ -1,55 +0,0 @@
-#!/bin/bash
-
-echo "Checking for remnants of Node.js, npm, and nvm..."
-
-# Check PATH
-echo "Checking PATH..."
-echo $PATH | grep -q 'node\|npm' && echo "Found Node.js or npm in PATH"
-
-# Check Homebrew
-echo "Checking Homebrew..."
-brew list | grep -q 'node\|npm' && echo "Found Node.js or npm installed with Homebrew"
-
-# Check Yarn
-echo "Checking Yarn..."
-command -v yarn >/dev/null 2>&1 && echo "Found Yarn"
-
-# Check Node.js and npm directories
-echo "Checking Node.js and npm directories..."
-ls ~/.npm >/dev/null 2>&1 && echo "Found ~/.npm directory"
-ls ~/.node-gyp >/dev/null 2>&1 && echo "Found ~/.node-gyp directory"
-
-# Check open files and sockets
-echo "Checking open files and sockets..."
-lsof | grep -q 'node' && echo "Found open files or sockets related to Node.js"
-
-# Check other version managers
-echo "Checking other version managers..."
-command -v n >/dev/null 2>&1 && echo "Found 'n' version manager"
-
-# Check temporary directories
-echo "Checking temporary directories..."
-ls /tmp | grep -q 'node\|npm' && echo "Found Node.js or npm related files in /tmp"
-
-# Check Browserify and Webpack cache
-echo "Checking Browserify and Webpack cache..."
-ls ~/.config/browserify >/dev/null 2>&1 && echo "Found Browserify cache"
-ls ~/.config/webpack >/dev/null 2>&1 && echo "Found Webpack cache"
-
-# Check Electron cache
-echo "Checking Electron cache..."
-ls ~/.electron >/dev/null 2>&1 && echo "Found Electron cache"
-
-# Check logs
-echo "Checking logs..."
-ls ~/.npm/_logs >/dev/null 2>&1 && echo "Found npm logs"
-ls ~/.node-gyp/*.log >/dev/null 2>&1 && echo "Found Node.js logs"
-
-# Check miscellaneous directories
-echo "Checking miscellaneous directories..."
-ls ~/.node_repl_history >/dev/null 2>&1 && echo "Found ~/.node_repl_history"
-ls ~/.v8flags* >/dev/null 2>&1 && echo "Found ~/.v8flags*"
-ls ~/.npm-global >/dev/null 2>&1 && echo "Found ~/.npm-global"
-ls ~/.nvm-global >/dev/null 2>&1 && echo "Found ~/.nvm-global"
-
-echo "Check completed."
diff --git a/comfy b/comfy
deleted file mode 100755
index 3093b61..0000000
--- a/comfy
+++ /dev/null
@@ -1,26 +0,0 @@
-#!/bin/bash
-
-# Create a new tmux session named "comfy" detached (-d) and start the first command in the left pane
-tmux new-session -d -s comfy -n comfypane
-
-# Split the window into two panes. By default, this creates a vertical split.
-tmux split-window -h -t comfy
-
-# Select the first pane to setup comfy environment
-tmux select-pane -t 0
-COMFY_MAMBA=$(mamba env list | grep "^comfy" | awk '{print $2}')
-tmux send-keys -t 0 "cd ~/workshop/sd/ComfyUI" C-m
-tmux send-keys -t 0 "export PATH=\"$COMFY_MAMBA/bin:\$PATH\"" C-m
-tmux send-keys -t 0 "source ~/.zshrc" C-m
-tmux send-keys -t 0 "mamba activate comfy; sleep 1; while true; do PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 PYTORCH_ENABLE_MPS_FALLBACK=1 python main.py --preview-method auto --force-fp16 --enable-cors-header; exit_status=\$?; if [ \$exit_status -ne 0 ]; then osascript -e 'display notification \"ComfyUI script exited unexpectedly\" with title \"Error in ComfyUI\"'; fi; sleep 1; done" C-m
-
-# Select the second pane to setup extracomfy environment
-tmux select-pane -t 1
-IG_MAMBA=$(mamba env list | grep "^insta" | awk '{print $2}')
-tmux send-keys -t 1 "export PATH=\"$IG_MAMBA/bin:\$PATH\"" C-m
-tmux send-keys -t 1 "source ~/.zshrc" C-m
-tmux send-keys -t 1 "mamba activate instabot; cd workshop/igbot" C-m
-
-# Attach to the tmux session
-# tmux attach -t comfy
-
diff --git a/ddns b/ddns
deleted file mode 100755
index 6a78dff..0000000
--- a/ddns
+++ /dev/null
@@ -1,55 +0,0 @@
-#!/bin/bash
-source /home/sij/.GLOBAL_VARS
-
-service="https://am.i.mullvad.net/ip"
-# Obtain the current public IP address
-#current_ip=$(ssh -n sij@10.13.37.10 curl -s $service)
-current_ip=$(curl -s $service)
-last_ip=$(cat /home/sij/.services/ip.txt)
-api_token=$CF_API_KEY
-
-# Path to the JSON file with zone IDs, subdomains, and DNS IDs mappings
-json_file="/home/sij/.services/cf_domains.json"
-
-force_update=false
-
-# Parse command line arguments for --force flag
-while [[ "$#" -gt 0 ]]; do
-    case $1 in
-        -f|--force) force_update=true ;;
-        *) echo "Unknown parameter passed: $1"; exit 1 ;;
-    esac
-    shift
-done
-
-# Temporary file to store update results
-temp_file=$(mktemp)
-
-# Function to update DNS records
-update_dns_record() {
-    zone_id=$1
-    subdomain=$2
-    dns_id=$3
-    update_result=$(curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$zone_id/dns_records/$dns_id" \
-         -H "Authorization: Bearer $api_token" \
-         -H "Content-Type: application/json" \
-         --data "{\"type\":\"A\",\"name\":\"$subdomain\",\"content\":\"$current_ip\",\"ttl\":120,\"proxied\":true}")
-    echo "$update_result" >> "$temp_file"
-}
-
-# Check if IP has changed or --force flag is used
-if [ "$current_ip" != "$last_ip" ] || [ "$force_update" = true ]; then
-    echo $current_ip > /home/sij/.services/ip.txt
-    # Iterate through each domain in the JSON
-    /home/sij/miniforge3/bin/jq -r '.[] | .zone_id as $zone_id | .subdomains | to_entries[] | [$zone_id, .key, .value] | @tsv' $json_file |
-    while IFS=$'\t' read -r zone_id subdomain dns_id; do
-        update_dns_record "$zone_id" "$subdomain" "$dns_id"
-    done
-    # Combine all update results into a single JSON array
-    /home/sij/miniforge3/bin/jq -s '.' "$temp_file"
-    # Remove the temporary file
-    rm "$temp_file"
-else
-    echo "IP address has not changed from ${last_ip}. No action taken."
-fi
-
diff --git a/delpycache b/delpycache
deleted file mode 100755
index 5ec9a3a..0000000
--- a/delpycache
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/bin/bash
-
-# Default directories to search
-directories=("~/sync" "~/workshop")
-
-# Check if a command line argument is provided
-if [ $# -gt 0 ]; then
-  if [ "$1" == "." ]; then
-    # Use the current directory
-    directories=(".")
-  else
-    # Use the provided directory
-    directories=("$1")
-  fi
-fi
-
-# Iterate through each directory
-for dir in "${directories[@]}"; do
-  # Expand tilde to home directory
-  expanded_dir=$(eval echo "$dir")
-
-  # Find and delete __pycache__ directories
-  find "$expanded_dir" -type d -name "__pycache__" -exec rm -rf {} +
-done
-
-echo "Deletion of __pycache__ folders completed."
-
diff --git a/deps b/deps
deleted file mode 100755
index fa32adc..0000000
--- a/deps
+++ /dev/null
@@ -1,429 +0,0 @@
-#!/usr/bin/env python3
-
-import argparse
-import os
-import re
-import subprocess
-import sys
-import urllib.request
-import urllib.error
-
-############################
-# Built-in, Known Corrections, Exclusions
-############################
-
-BUILTIN_MODULES = {
-    'abc', 'aifc', 'argparse', 'array', 'ast', 'asynchat', 'asyncio', 'asyncore', 'atexit',
-    'audioop', 'base64', 'bdb', 'binascii', 'binhex', 'bisect', 'builtins', 'bz2', 'calendar',
-    'cgi', 'cgitb', 'chunk', 'cmath', 'cmd', 'code', 'codecs', 'codeop', 'collections', 'colorsys',
-    'compileall', 'concurrent', 'configparser', 'contextlib', 'copy', 'copyreg', 'crypt', 'csv',
-    'ctypes', 'curses', 'dataclasses', 'datetime', 'dbm', 'decimal', 'difflib', 'dis', 'distutils',
-    'doctest', 'dummy_threading', 'email', 'encodings', 'ensurepip', 'enum', 'errno', 'faulthandler',
-    'fcntl', 'filecmp', 'fileinput', 'fnmatch', 'formatter', 'fractions', 'ftplib', 'functools',
-    'gc', 'getopt', 'getpass', 'gettext', 'glob', 'gzip', 'hashlib', 'heapq', 'hmac', 'html', 'http',
-    'imaplib', 'imghdr', 'imp', 'importlib', 'inspect', 'io', 'ipaddress', 'itertools', 'json',
-    'keyword', 'lib2to3', 'linecache', 'locale', 'logging', 'lzma', 'mailbox', 'mailcap', 'marshal',
-    'math', 'mimetypes', 'modulefinder', 'multiprocessing', 'netrc', 'nntplib', 'numbers', 'operator',
-    'optparse', 'os', 'ossaudiodev', 'parser', 'pathlib', 'pdb', 'pickle', 'pickletools', 'pipes',
-    'pkgutil', 'platform', 'plistlib', 'poplib', 'posix', 'pprint', 'profile', 'pstats', 'pty',
-    'pwd', 'py_compile', 'pyclbr', 'pydoc', 'queue', 'quopri', 'random', 're', 'readline',
-    'reprlib', 'resource', 'rlcompleter', 'runpy', 'sched', 'secrets', 'select', 'selectors', 'shelve',
-    'shlex', 'shutil', 'signal', 'site', 'smtpd', 'smtplib', 'sndhdr', 'socket', 'socketserver',
-    'spwd', 'sqlite3', 'ssl', 'stat', 'statistics', 'string', 'stringprep', 'struct', 'subprocess',
-    'sunau', 'symtable', 'sys', 'sysconfig', 'syslog', 'tabnanny', 'tarfile', 'telnetlib', 'tempfile',
-    'termios', 'test', 'textwrap', 'threading', 'time', 'timeit', 'token', 'tokenize', 'trace',
-    'traceback', 'tracemalloc', 'tty', 'turtle', 'types', 'typing', 'unicodedata', 'unittest',
-    'urllib', 'uu', 'uuid', 'venv', 'warnings', 'wave', 'weakref', 'webbrowser', 'xdrlib', 'xml',
-    'xmlrpc', 'zipapp', 'zipfile', 'zipimport', 'zlib'
-}
-
-KNOWN_CORRECTIONS = {
-    'dateutil': 'python-dateutil',
-    'dotenv': 'python-dotenv',
-    'docx': 'python-docx',
-    'tesseract': 'pytesseract',
-    'magic': 'python-magic',
-    'multipart': 'python-multipart',
-    'newspaper': 'newspaper3k',
-    'srtm': 'elevation',
-    'yaml': 'pyyaml',
-    'zoneinfo': 'backports.zoneinfo'
-}
-
-EXCLUDED_NAMES = {'models', 'data', 'convert', 'example', 'tests'}
-
-############################
-# Environment & Installation
-############################
-
-def run_command(command):
-    process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-    stdout, stderr = process.communicate()
-    return process.returncode, stdout.decode(), stderr.decode()
-
-def which(cmd):
-    """
-    Check if `cmd` is on PATH. Returns True if found, else False.
-    """
-    for pth in os.environ["PATH"].split(os.pathsep):
-        cmd_path = os.path.join(pth, cmd)
-        if os.path.isfile(cmd_path) and os.access(cmd_path, os.X_OK):
-            return True
-    return False
-
-def in_conda_env():
-    """
-    Returns True if we appear to be in a conda environment,
-    typically indicated by CONDA_DEFAULT_ENV or other variables.
-    """
-    return "CONDA_DEFAULT_ENV" in os.environ
-
-# We'll detect once at runtime (if in a conda env and skip_conda=False):
-# we either pick 'mamba' if available, else 'conda' if available, else None
-PREFERRED_CONDA_TOOL = None
-
-def detect_conda_tool(skip_conda=False):
-    """
-    Decide which tool to use for conda-based installation:
-    1) If skip_conda is True or not in a conda env -> return None
-    2) If mamba is installed, return 'mamba'
-    3) Else if conda is installed, return 'conda'
-    4) Else return None
-    """
-    if skip_conda or not in_conda_env():
-        return None
-    if which("mamba"):
-        return "mamba"
-    elif which("conda"):
-        return "conda"
-    return None
-
-def is_package_installed(package, skip_conda=False):
-    """
-    Checks if 'package' is installed with the chosen conda tool or pip.
-    """
-    conda_tool = detect_conda_tool(skip_conda)
-    if conda_tool == "mamba":
-        returncode, stdout, _ = run_command(["mamba", "list"])
-        if returncode == 0:
-            pattern = rf"^{re.escape(package)}\s"
-            if re.search(pattern, stdout, re.MULTILINE):
-                return True
-    elif conda_tool == "conda":
-        returncode, stdout, _ = run_command(["conda", "list"])
-        if returncode == 0:
-            pattern = rf"^{re.escape(package)}\s"
-            if re.search(pattern, stdout, re.MULTILINE):
-                return True
-
-    # Fall back to pip
-    returncode, stdout, _ = run_command(["pip", "list"])
-    pattern = rf"^{re.escape(package)}\s"
-    return re.search(pattern, stdout, re.MULTILINE) is not None
-
-def install_package(package, skip_conda=False):
-    """
-    Installs 'package'.
-      1) Decide once if we can use 'mamba' or 'conda' (if skip_conda=False and in conda env).
-      2) Try that conda tool for installation
-      3) If that fails or not found, fallback to pip
-    """
-    if is_package_installed(package, skip_conda=skip_conda):
-        print(f"Package '{package}' is already installed.")
-        return
-
-    conda_tool = detect_conda_tool(skip_conda)
-
-    if conda_tool == "mamba":
-        print(f"Installing '{package}' with mamba...")
-        returncode, _, _ = run_command(["mamba", "install", "-y", "-c", "conda-forge", package])
-        if returncode == 0:
-            print(f"Successfully installed '{package}' via mamba.")
-            return
-        print(f"mamba failed for '{package}'. Falling back to pip...")
-
-    elif conda_tool == "conda":
-        print(f"Installing '{package}' with conda...")
-        returncode, _, _ = run_command(["conda", "install", "-y", "-c", "conda-forge", package])
-        if returncode == 0:
-            print(f"Successfully installed '{package}' via conda.")
-            return
-        print(f"conda failed for '{package}'. Falling back to pip...")
-
-    # fallback: pip
-    print(f"Installing '{package}' with pip...")
-    returncode, _, _ = run_command(["pip", "install", package])
-    if returncode != 0:
-        print(f"Failed to install package '{package}'.")
-    else:
-        print(f"Successfully installed '{package}' via pip.")
-
-############################
-# Parsing Python Imports
-############################
-
-def process_requirements_file(file_path):
-    packages = set()
-    with open(file_path, 'r') as file:
-        for line in file:
-            line = line.strip()
-            if line and not line.startswith('#'):
-                packages.add(line)
-    return packages
-
-def process_python_file(file_path):
-    """
-    Return a set of external imports (not built-in or excluded).
-    Applies known corrections to recognized package names.
-    """
-    imports = set()
-    with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
-        content = f.read()
-
-    for line in content.split('\n'):
-        line = line.strip()
-        if line.startswith(('import ', 'from ')) and not line.startswith('#'):
-            if line.startswith('import '):
-                modules = line.replace('import ', '').split(',')
-                for mod in modules:
-                    mod = re.sub(r'\s+as\s+\w+', '', mod).split('.')[0].strip()
-                    if mod and not mod.isupper() and mod not in EXCLUDED_NAMES and mod not in BUILTIN_MODULES:
-                        imports.add(KNOWN_CORRECTIONS.get(mod, mod))
-            elif line.startswith('from '):
-                mod = line.split(' ')[1].split('.')[0].strip()
-                if mod and not mod.isupper() and mod not in EXCLUDED_NAMES and mod not in BUILTIN_MODULES:
-                    imports.add(KNOWN_CORRECTIONS.get(mod, mod))
-
-    return imports
-
-def find_imports_in_path(path, recurse=False):
-    """
-    Finds Python imports in the specified path. If path is a file, parse that file;
-    if path is a dir, parse .py files in that dir. Recurse subdirs if 'recurse=True'.
-    """
-    imports = set()
-    if not os.path.exists(path):
-        print(f"Warning: Path does not exist: {path}")
-        return imports
-
-    if os.path.isfile(path):
-        if path.endswith('.py'):
-            imports.update(process_python_file(path))
-        else:
-            print(f"Skipping non-Python file: {path}")
-        return imports
-
-    # Directory:
-    if recurse:
-        for root, _, filenames in os.walk(path):
-            for fn in filenames:
-                if fn.endswith('.py'):
-                    imports.update(process_python_file(os.path.join(root, fn)))
-    else:
-        for fn in os.listdir(path):
-            fullpath = os.path.join(path, fn)
-            if os.path.isfile(fullpath) and fn.endswith('.py'):
-                imports.update(process_python_file(fullpath))
-
-    return imports
-
-############################
-# PyPI Availability Check
-############################
-
-def check_library_on_pypi(library):
-    """
-    Returns True if 'library' is on PyPI, else False.
-    Using urllib to avoid external dependencies.
-    """
-    url = f"https://pypi.org/pypi/{library}/json"
-    try:
-        with urllib.request.urlopen(url, timeout=5) as resp:
-            return (resp.status == 200)  # 200 => available
-    except (urllib.error.URLError, urllib.error.HTTPError, ValueError):
-        return False
-
-############################
-# Writing to requirements/missing
-############################
-
-def append_to_file(line, filename):
-    """
-    Append 'line' to 'filename' only if it's not already in there.
-    """
-    if not os.path.isfile(filename):
-        with open(filename, 'w') as f:
-            f.write(line + '\n')
-        return
-
-    with open(filename, 'r') as f:
-        lines = {l.strip() for l in f.readlines() if l.strip()}
-    if line not in lines:
-        with open(filename, 'a') as f:
-            f.write(line + '\n')
-
-############################
-# Subcommand: ls
-############################
-
-def subcmd_ls(parsed_args):
-    """
-    Gathers imports, displays them, then writes them to requirements.txt or missing-packages.txt
-    just like the original import_finder script did.
-    """
-    path = parsed_args.path
-    recurse = parsed_args.recurse
-
-    imports = find_imports_in_path(path, recurse=recurse)
-    if not imports:
-        print("No Python imports found (or none that require external packages).")
-        return
-
-    print("Imports found:")
-    for imp in sorted(imports):
-        print(f" - {imp}")
-
-    # Now we replicate the logic of import_finder:
-    # If on PyPI => requirements.txt; else => missing-packages.txt
-    for lib in sorted(imports):
-        if check_library_on_pypi(lib):
-            append_to_file(lib, 'requirements.txt')
-        else:
-            append_to_file(lib, 'missing-packages.txt')
-
-    print("\nWrote results to requirements.txt (PyPI-available) and missing-packages.txt (unavailable).")
-
-############################
-# Subcommand: install
-############################
-
-def subcmd_install(parsed_args):
-    """
-    If the user typed no direct packages/scripts or only used '-r' for recursion with no other args,
-    we gather imports from the current dir, check PyPI availability, and install them with conda/mamba/pip
-    (unless --no-conda is given).
-    
-    Otherwise, if the user typed e.g. '-R <reqfile>', or a .py file, or direct package names, we handle them.
-    """
-    skip_conda = parsed_args.no_conda
-    is_recursive = parsed_args.recurse
-
-    # If user typed no leftover arguments or only the recursion flag, we do the "auto-scan & install" mode
-    if not parsed_args.packages:
-        # Means: "deps install" or "deps install -r"
-        imports_found = find_imports_in_path('.', recurse=is_recursive)
-        if not imports_found:
-            print("No imports found in current directory.")
-            return
-        # Filter out those that are on PyPI
-        to_install = []
-        for lib in sorted(imports_found):
-            if check_library_on_pypi(lib):
-                to_install.append(lib)
-            else:
-                print(f"Skipping '{lib}' (not found on PyPI).")
-        if not to_install:
-            print("No PyPI-available packages found to install.")
-            return
-        print("Installing packages:", ', '.join(to_install))
-        for pkg in to_install:
-            install_package(pkg, skip_conda=skip_conda)
-        return
-
-    # Otherwise, we have leftover items: direct packages, .py files, or "-R" requirements.
-    leftover_args = parsed_args.packages
-    packages_to_install = set()
-
-    i = 0
-    while i < len(leftover_args):
-        arg = leftover_args[i]
-        if arg == '-R':
-            # next arg is a requirements file
-            if i + 1 < len(leftover_args):
-                req_file = leftover_args[i + 1]
-                if os.path.isfile(req_file):
-                    pkgs = process_requirements_file(req_file)
-                    packages_to_install.update(pkgs)
-                else:
-                    print(f"Requirements file not found: {req_file}")
-                i += 2
-            else:
-                print("Error: -R requires a file path.")
-                return
-        elif arg.endswith('.py'):
-            # parse imports from that script
-            if os.path.isfile(arg):
-                pkgs = process_python_file(arg)
-                packages_to_install.update(pkgs)
-            else:
-                print(f"File not found: {arg}")
-            i += 1
-        else:
-            # treat as a direct package name
-            packages_to_install.add(arg)
-            i += 1
-
-    # Now install them
-    for pkg in sorted(packages_to_install):
-        install_package(pkg, skip_conda=skip_conda)
-
-############################
-# Main
-############################
-
-def main():
-    parser = argparse.ArgumentParser(description='deps - Manage and inspect Python dependencies.')
-    subparsers = parser.add_subparsers(dest='subcommand', required=True)
-
-    # Subcommand: ls
-    ls_parser = subparsers.add_parser(
-        'ls',
-        help="List imports in a file/folder (and write them to requirements.txt/missing-packages.txt)."
-    )
-    ls_parser.add_argument(
-        '-r', '--recurse',
-        action='store_true',
-        help='Recurse into subfolders.'
-    )
-    ls_parser.add_argument(
-        'path',
-        nargs='?',
-        default='.',
-        help='File or directory to scan (default is current directory).'
-    )
-    ls_parser.set_defaults(func=subcmd_ls)
-
-    # Subcommand: install
-    install_parser = subparsers.add_parser(
-        'install',
-        help="Install packages or dependencies from .py files / current folder / subfolders."
-    )
-    install_parser.add_argument(
-        '-r', '--recurse',
-        action='store_true',
-        help="If no packages are specified, scanning current dir for imports will be recursive."
-    )
-    install_parser.add_argument(
-        '--no-conda',
-        action='store_true',
-        help="Skip using mamba/conda entirely and install only with pip."
-    )
-    install_parser.add_argument(
-        'packages',
-        nargs='*',
-        help=(
-            "Direct package names, .py files, or '-R <reqfile>'. If empty, scans current dir; "
-            "if combined with -r, scans recursively. Example usage:\n"
-            "  deps install requests flask\n"
-            "  deps install script.py\n"
-            "  deps install -R requirements.txt\n"
-            "  deps install -r   (recursively scan current dir)\n"
-        )
-    )
-    install_parser.set_defaults(func=subcmd_install)
-
-    parsed_args = parser.parse_args()
-    parsed_args.func(parsed_args)
-
-if __name__ == "__main__":
-    main()
diff --git a/emoji_flag b/emoji_flag
deleted file mode 100755
index eba028f..0000000
--- a/emoji_flag
+++ /dev/null
@@ -1,14 +0,0 @@
-#!/usr/bin/env python3
-import sys
-
-def flag_emoji(country_code):
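-    # 127397 == 0x1F1E6 (Regional Indicator Symbol Letter A) - ord('A'), so each
-    # ASCII letter shifts into the regional-indicator range; e.g. "US" -> πŸ‡ΊπŸ‡Έ.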
-    offset = 127397
-    flag = ''.join(chr(ord(char) + offset) for char in country_code.upper())
-    return flag
-
-if __name__ == "__main__":
-    if len(sys.argv) > 1:
-        country_code = sys.argv[1]
-        print(flag_emoji(country_code))
-    else:
-        print("No country code provided")
\ No newline at end of file
diff --git a/get b/get
deleted file mode 100755
index f7ab37e..0000000
--- a/get
+++ /dev/null
@@ -1,51 +0,0 @@
-#!/bin/bash
-
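-# Usage: get <git-repo-url>
-# Clones the repo, and if it ships a setup.py or requirements.txt, creates a
-# mamba environment named after the repo and installs the dependencies into it.
-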
-# Check if a URL is provided
-if [ $# -eq 0 ]; then
-    echo "Please provide a git repository URL."
-    exit 1
-fi
-
-# Extract the repository URL and name
-repo_url=$1
-repo_name=$(basename "$repo_url" .git)
-
-# Clone the repository
-git clone "$repo_url"
-
-# Check if the clone was successful
-if [ $? -ne 0 ]; then
-    echo "Failed to clone the repository."
-    exit 1
-fi
-
-# Change to the newly created directory
-cd "$repo_name" || exit
-
-# Check for setup.py or requirements.txt
-if [ -f "setup.py" ] || [ -f "requirements.txt" ]; then
-    # Create a new Mamba environment
-    mamba create -n "$repo_name" python -y
-
-    # Activate the new environment
-    eval "$(conda shell.bash hook)"
-    mamba activate "$repo_name"
-
-    # Install dependencies
-    if [ -f "setup.py" ]; then
-        echo "Installing from setup.py..."
-        # 'python setup.py install' is deprecated; pip drives the same build.
-        pip install .
-    fi
-
-    if [ -f "requirements.txt" ]; then
-        echo "Installing from requirements.txt..."
-        pip install -r requirements.txt
-    fi
-
-    echo "Environment setup complete."
-else
-    echo "No setup.py or requirements.txt found. Skipping environment setup."
-fi
-
-echo "Repository cloned and set up successfully."
-
diff --git a/gitpurge b/gitpurge
deleted file mode 100755
index 5831a9d..0000000
--- a/gitpurge
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/bin/bash
-
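-# Rewrite history so that only files currently tracked in the working tree
-# survive, then force-push every branch. Destructive: old commits are rewritten.
-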
-# Ensure we're in a git repository
-if ! git rev-parse --is-inside-work-tree > /dev/null 2>&1; then
-    echo "Error: This script must be run inside a Git repository."
-    exit 1
-fi
-
-# Check if git-filter-repo is installed
-if ! command -v git-filter-repo &> /dev/null; then
-    echo "Error: git-filter-repo is not installed. Please install it first."
-    echo "You can install it via pip: pip install git-filter-repo"
-    exit 1
-fi
-
-# Get a list of files that currently exist in the repository
-current_files=$(git ls-files)
-
-# Create a file with the list of files to keep
-echo "$current_files" > files_to_keep.txt
-
-# Use git-filter-repo to keep only the files that currently exist
-git filter-repo --paths-from-file files_to_keep.txt --force
-
-# Remove the temporary file
-rm files_to_keep.txt
-
-# Force push all branches
-git push origin --all --force
-
-echo "Purge complete. All files not present in the local repo have been removed from all commits on all branches."
-echo "The changes have been force-pushed to the remote repository."
-
diff --git a/gitscan b/gitscan
deleted file mode 100755
index c887990..0000000
--- a/gitscan
+++ /dev/null
@@ -1,22 +0,0 @@
-#!/bin/bash
-
-output_file="./repos.txt"
-
-# Clear the existing file or create it if it doesn't exist
-> "$output_file"
-
-# Find all .git directories in the current folder and subfolders, 
-# excluding hidden directories and suppressing permission denied errors
-find . -type d -name ".git" -not -path "*/.*/*" 2>/dev/null | while read -r gitdir; do
-    # Get the parent directory of the .git folder
-    repo_path=$(dirname "$gitdir")
-    echo "$repo_path" >> "$output_file"
-done
-
-echo "Git repositories have been written to $output_file"
-
-# Remove duplicate entries
-sort -u "$output_file" -o "$output_file"
-
-echo "Duplicate entries removed. Final list:"
-cat "$output_file"
\ No newline at end of file
diff --git a/import_finder b/import_finder
deleted file mode 100755
index 41cbf6b..0000000
--- a/import_finder
+++ /dev/null
@@ -1,144 +0,0 @@
-#!/usr/bin/env python3
-
-import os
-import re
-import requests
-import time
-
-# List of Python built-in modules
-BUILTIN_MODULES = {
-    'abc', 'aifc', 'argparse', 'array', 'ast', 'asynchat', 'asyncio', 'asyncore', 'atexit',
-    'audioop', 'base64', 'bdb', 'binascii', 'binhex', 'bisect', 'builtins', 'bz2', 'calendar',
-    'cgi', 'cgitb', 'chunk', 'cmath', 'cmd', 'code', 'codecs', 'codeop', 'collections', 'colorsys',
-    'compileall', 'concurrent', 'configparser', 'contextlib', 'copy', 'copyreg', 'crypt', 'csv',
-    'ctypes', 'curses', 'dataclasses', 'datetime', 'dbm', 'decimal', 'difflib', 'dis', 'distutils',
-    'doctest', 'dummy_threading', 'email', 'encodings', 'ensurepip', 'enum', 'errno', 'faulthandler',
-    'fcntl', 'filecmp', 'fileinput', 'fnmatch', 'formatter', 'fractions', 'ftplib', 'functools', 
-    'gc', 'getopt', 'getpass', 'gettext', 'glob', 'gzip', 'hashlib', 'heapq', 'hmac', 'html', 'http',
-    'imaplib', 'imghdr', 'imp', 'importlib', 'inspect', 'io', 'ipaddress', 'itertools', 'json', 
-    'keyword', 'lib2to3', 'linecache', 'locale', 'logging', 'lzma', 'mailbox', 'mailcap', 'marshal', 
-    'math', 'mimetypes', 'modulefinder', 'multiprocessing', 'netrc', 'nntplib', 'numbers', 'operator',
-    'optparse', 'os', 'ossaudiodev', 'parser', 'pathlib', 'pdb', 'pickle', 'pickletools', 'pipes', 
-    'pkgutil', 'platform', 'plistlib', 'poplib', 'posix', 'pprint', 'profile', 'pstats', 'pty', 
-    'pwd', 'py_compile', 'pyclbr', 'pydoc', 'queue', 'quopri', 'random', 're', 'readline', 
-    'reprlib', 'resource', 'rlcompleter', 'runpy', 'sched', 'secrets', 'select', 'selectors', 'shelve',
-    'shlex', 'shutil', 'signal', 'site', 'smtpd', 'smtplib', 'sndhdr', 'socket', 'socketserver', 
-    'spwd', 'sqlite3', 'ssl', 'stat', 'statistics', 'string', 'stringprep', 'struct', 'subprocess',
-    'sunau', 'symtable', 'sys', 'sysconfig', 'syslog', 'tabnanny', 'tarfile', 'telnetlib', 'tempfile',
-    'termios', 'test', 'textwrap', 'threading', 'time', 'timeit', 'token', 'tokenize', 'trace', 
-    'traceback', 'tracemalloc', 'tty', 'turtle', 'types', 'typing', 'unicodedata', 'unittest', 
-    'urllib', 'uu', 'uuid', 'venv', 'warnings', 'wave', 'weakref', 'webbrowser', 'xdrlib', 'xml', 
-    'xmlrpc', 'zipapp', 'zipfile', 'zipimport', 'zlib'
-}
-
-# Known corrections for PyPI package names
-KNOWN_CORRECTIONS = {
-    'dateutil': 'python-dateutil',
-    'dotenv': 'python-dotenv',
-    'docx': 'python-docx',
-    'tesseract': 'pytesseract',
-    'magic': 'python-magic',
-    'multipart': 'python-multipart',
-    'newspaper': 'newspaper3k',
-    'srtm': 'elevation',
-    'yaml': 'pyyaml',
-    'zoneinfo': 'backports.zoneinfo'
-}
-
-# List of generic names to exclude
-EXCLUDED_NAMES = {'models', 'data', 'convert', 'example', 'tests'}
-
-def find_imports(root_dir):
-    imports_by_file = {}
-    for dirpath, _, filenames in os.walk(root_dir):
-        for filename in filenames:
-            if filename.endswith('.py'):
-                filepath = os.path.join(dirpath, filename)
-                with open(filepath, 'r') as file:
-                    import_lines = []
-                    for line in file:
-                        line = line.strip()
-                        if line.startswith(('import ', 'from ')) and not line.startswith('#'):
-                            import_lines.append(line)
-                    imports_by_file[filepath] = import_lines
-    return imports_by_file
-
-def process_import_lines(import_lines):
-    processed_lines = set()  # Use a set to remove duplicates
-    for line in import_lines:
-        # Handle 'import xyz' and 'import abc, def, geh'
-        if line.startswith('import '):
-            modules = line.replace('import ', '').split(',')
-            for mod in modules:
-                mod = re.sub(r'\s+as\s+\w+', '', mod).split('.')[0].strip()
-                if mod and not mod.isupper() and mod not in EXCLUDED_NAMES:
-                    processed_lines.add(mod)
-        # Handle 'from abc import def, geh'
-        elif line.startswith('from '):
-            mod = line.split(' ')[1].split('.')[0].strip()
-            if mod and not mod.isupper() and mod not in EXCLUDED_NAMES:
-                processed_lines.add(mod)
-    return processed_lines
-
-def check_pypi_availability(libraries):
-    available = set()
-    unavailable = set()
-    for lib in libraries:
-        if lib in BUILTIN_MODULES:  # Skip built-in modules
-            continue
-        corrected_lib = KNOWN_CORRECTIONS.get(lib, lib)
-        try:
-            if check_library_on_pypi(corrected_lib):
-                available.add(corrected_lib)
-            else:
-                unavailable.add(corrected_lib)
-        except requests.exceptions.RequestException:
-            print(f"Warning: Unable to check {corrected_lib} on PyPI due to network error.")
-            unavailable.add(corrected_lib)
-    return available, unavailable
-
-def check_library_on_pypi(library):
-    max_retries = 3
-    for attempt in range(max_retries):
-        try:
-            response = requests.get(f"https://pypi.org/pypi/{library}/json", timeout=5)
-            return response.status_code == 200
-        except requests.exceptions.RequestException:
-            if attempt < max_retries - 1:
-                time.sleep(1)  # Wait for 1 second before retrying
-            else:
-                raise
-
-def save_to_requirements_file(available, output_file='requirements.txt'):
-    existing_requirements = set()
-    if os.path.isfile(output_file):
-        with open(output_file, 'r') as file:
-            existing_requirements = set(line.strip() for line in file)
-    with open(output_file, 'a') as file:
-        for pkg in sorted(available - existing_requirements):
-            print(f"Adding to requirements.txt: {pkg}")
-            file.write(pkg + '\n')
-
-def save_to_missing_file(unavailable, output_file='missing-packages.txt'):
-    existing_missing = set()
-    if os.path.isfile(output_file):
-        with open(output_file, 'r') as file:
-            existing_missing = set(line.strip() for line in file)
-    with open(output_file, 'a') as file:
-        for pkg in sorted(unavailable - existing_missing):
-            print(f"Adding to missing-packages.txt: {pkg}")
-            file.write(pkg + '\n')
-
-if __name__ == "__main__":
-    root_dir = os.getcwd()  # Get the current working directory
-
-    imports_by_file = find_imports(root_dir)
-    for filepath, import_lines in imports_by_file.items():
-        print(f"# Processing {filepath}")
-        processed_lines = process_import_lines(import_lines)
-        available, unavailable = check_pypi_availability(processed_lines)
-        save_to_requirements_file(available)
-        save_to_missing_file(unavailable)
-
-    print(f"Processed import statements have been saved to requirements.txt and missing-packages.txt")
diff --git a/ip b/ip
deleted file mode 100755
index 831d81d..0000000
--- a/ip
+++ /dev/null
@@ -1,2 +0,0 @@
-#!/bin/bash
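-# Look up whois info for the current public IP, as reported by Mullvad's API.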
-whois $(curl -s https://am.i.mullvad.net/ip)
diff --git a/kip b/kip
deleted file mode 100755
index 5946d92..0000000
--- a/kip
+++ /dev/null
@@ -1,144 +0,0 @@
-#!/usr/bin/env python3
-
-import os
-import sys
-import re
-import subprocess
-import requests
-import time
-from typing import Set, Tuple, List
-
-# List of Python built-in modules
-BUILTIN_MODULES = {
-    'abc', 'aifc', 'argparse', 'array', 'ast', 'asynchat', 'asyncio', 'asyncore', 'atexit',
-    'audioop', 'base64', 'bdb', 'binascii', 'binhex', 'bisect', 'builtins', 'bz2', 'calendar',
-    'cgi', 'cgitb', 'chunk', 'cmath', 'cmd', 'code', 'codecs', 'codeop', 'collections', 'colorsys',
-    'compileall', 'concurrent', 'configparser', 'contextlib', 'copy', 'copyreg', 'crypt', 'csv',
-    'ctypes', 'curses', 'dataclasses', 'datetime', 'dbm', 'decimal', 'difflib', 'dis', 'distutils',
-    'doctest', 'dummy_threading', 'email', 'encodings', 'ensurepip', 'enum', 'errno', 'faulthandler',
-    'fcntl', 'filecmp', 'fileinput', 'fnmatch', 'formatter', 'fractions', 'ftplib', 'functools',
-    'gc', 'getopt', 'getpass', 'gettext', 'glob', 'gzip', 'hashlib', 'heapq', 'hmac', 'html', 'http',
-    'imaplib', 'imghdr', 'imp', 'importlib', 'inspect', 'io', 'ipaddress', 'itertools', 'json',
-    'keyword', 'lib2to3', 'linecache', 'locale', 'logging', 'lzma', 'mailbox', 'mailcap', 'marshal',
-    'math', 'mimetypes', 'modulefinder', 'multiprocessing', 'netrc', 'nntplib', 'numbers', 'operator',
-    'optparse', 'os', 'ossaudiodev', 'parser', 'pathlib', 'pdb', 'pickle', 'pickletools', 'pipes',
-    'pkgutil', 'platform', 'plistlib', 'poplib', 'posix', 'pprint', 'profile', 'pstats', 'pty',
-    'pwd', 'py_compile', 'pyclbr', 'pydoc', 'queue', 'quopri', 'random', 're', 'readline',
-    'reprlib', 'resource', 'rlcompleter', 'runpy', 'sched', 'secrets', 'select', 'selectors', 'shelve',
-    'shlex', 'shutil', 'signal', 'site', 'smtpd', 'smtplib', 'sndhdr', 'socket', 'socketserver',
-    'spwd', 'sqlite3', 'ssl', 'stat', 'statistics', 'string', 'stringprep', 'struct', 'subprocess',
-    'sunau', 'symtable', 'sys', 'sysconfig', 'syslog', 'tabnanny', 'tarfile', 'telnetlib', 'tempfile',
-    'termios', 'test', 'textwrap', 'threading', 'time', 'timeit', 'token', 'tokenize', 'trace',
-    'traceback', 'tracemalloc', 'tty', 'turtle', 'types', 'typing', 'unicodedata', 'unittest',
-    'urllib', 'uu', 'uuid', 'venv', 'warnings', 'wave', 'weakref', 'webbrowser', 'xdrlib', 'xml',
-    'xmlrpc', 'zipapp', 'zipfile', 'zipimport', 'zlib'
-}
-
-# Known corrections for PyPI package names
-KNOWN_CORRECTIONS = {
-    'dateutil': 'python-dateutil',
-    'dotenv': 'python-dotenv',
-    'docx': 'python-docx',
-    'tesseract': 'pytesseract',
-    'magic': 'python-magic',
-    'multipart': 'python-multipart',
-    'newspaper': 'newspaper3k',
-    'srtm': 'elevation',
-    'yaml': 'pyyaml',
-    'zoneinfo': 'backports.zoneinfo'
-}
-
-# List of generic names to exclude
-EXCLUDED_NAMES = {'models', 'data', 'convert', 'example', 'tests'}
-
-def run_command(command: List[str]) -> Tuple[int, str, str]:
-    process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-    stdout, stderr = process.communicate()
-    return process.returncode, stdout.decode(), stderr.decode()
-
-def is_package_installed(package: str) -> bool:
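-    # A package counts as installed if it appears in either `mamba list` or `pip list`.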
-    pattern = f"^{re.escape(package)}\\s"
-    returncode, stdout, _ = run_command(["mamba", "list"])
-    if returncode == 0 and re.search(pattern, stdout, re.MULTILINE):
-        return True
-    returncode, stdout, _ = run_command(["pip", "list"])
-    return re.search(pattern, stdout, re.MULTILINE) is not None
-
-def install_package(package: str):
-    if is_package_installed(package):
-        print(f"Package '{package}' is already installed.")
-        return
-
-    print(f"Installing package '{package}'.")
-    returncode, _, _ = run_command(["mamba", "install", "-y", "-c", "conda-forge", package])
-    if returncode != 0:
-        returncode, _, _ = run_command(["pip", "install", package])
-    
-    if returncode != 0:
-        print(f"Failed to install package '{package}'.")
-    else:
-        print(f"Successfully installed package '{package}'.")
-
-def process_python_file(file_path: str) -> Set[str]:
-    with open(file_path, 'r') as file:
-        content = file.read()
-
-    imports = set()
-    for line in content.split('\n'):
-        line = line.strip()
-        if line.startswith(('import ', 'from ')) and not line.startswith('#'):
-            if line.startswith('import '):
-                modules = line.replace('import ', '').split(',')
-                for mod in modules:
-                    mod = re.sub(r'\s+as\s+\w+', '', mod).split('.')[0].strip()
-                    if mod and not mod.isupper() and mod not in EXCLUDED_NAMES and mod not in BUILTIN_MODULES:
-                        imports.add(KNOWN_CORRECTIONS.get(mod, mod))
-            elif line.startswith('from '):
-                mod = line.split(' ')[1].split('.')[0].strip()
-                if mod and not mod.isupper() and mod not in EXCLUDED_NAMES and mod not in BUILTIN_MODULES:
-                    imports.add(KNOWN_CORRECTIONS.get(mod, mod))
-
-    return imports
-
-def process_requirements_file(file_path: str) -> Set[str]:
-    with open(file_path, 'r') as file:
-        return {line.strip() for line in file if line.strip() and not line.startswith('#')}
-
-def main():
-    if len(sys.argv) < 2:
-        print("Usage: kip <package1> [<package2> ...] or kip <script.py> or kip -r <requirements.txt>")
-        sys.exit(1)
-
-    packages_to_install = set()
-
-    i = 1
-    while i < len(sys.argv):
-        arg = sys.argv[i]
-        if arg == '-r':
-            if i + 1 < len(sys.argv):
-                requirements_file = sys.argv[i + 1]
-                if os.path.isfile(requirements_file):
-                    packages_to_install.update(process_requirements_file(requirements_file))
-                else:
-                    print(f"Requirements file {requirements_file} not found.")
-                    sys.exit(1)
-                i += 2
-            else:
-                print("Error: -r flag requires a file path.")
-                sys.exit(1)
-        elif arg.endswith('.py'):
-            if os.path.isfile(arg):
-                packages_to_install.update(process_python_file(arg))
-            else:
-                print(f"File {arg} not found.")
-                sys.exit(1)
-            i += 1
-        else:
-            packages_to_install.add(arg)
-            i += 1
-
-    for package in packages_to_install:
-        install_package(package)
-
-if __name__ == "__main__":
-    main()
diff --git a/linecount b/linecount
deleted file mode 100755
index a689595..0000000
--- a/linecount
+++ /dev/null
@@ -1,70 +0,0 @@
-#!/usr/bin/env python3
-
-import os
-import sys
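-
-# Usage: linecount [ext ...]
-# e.g. `linecount .py .sh` counts lines only in *.py and *.sh files under the
-# current directory; with no arguments, every non-binary file is counted.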
-
-def is_binary(file_path):
-    """
-    Determines if a file is binary by checking its content.
-    Returns True for binary files, False for text files.
-    """
-    try:
-        with open(file_path, 'rb') as f:
-            # Read the first 1024 bytes to check if it's binary
-            chunk = f.read(1024)
-            if b'\0' in chunk:
-                return True
-            return False
-    except Exception as e:
-        print(f"Error reading file {file_path}: {e}")
-        return True  # Treat unreadable files as binary for safety.
-
-def count_lines_in_file(file_path):
-    """
-    Counts the number of lines in a given text file.
-    """
-    try:
-        with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
-            return sum(1 for _ in f)
-    except Exception as e:
-        print(f"Error counting lines in file {file_path}: {e}")
-        return 0
-
-def count_lines_in_directory(directory, extensions=None):
-    """
-    Recursively counts lines in all text files (optionally filtered by extensions) within the directory.
-    """
-    total_lines = 0
-    total_files = 0
-    for root, _, files in os.walk(directory):
-        for file_name in files:
-            file_path = os.path.join(root, file_name)
-
-            # Skip binary files
-            if is_binary(file_path):
-                continue
-
-            # Check for extensions if provided
-            if extensions and not file_name.lower().endswith(tuple(extensions)):
-                continue
-
-            # Count lines in the valid file
-            lines = count_lines_in_file(file_path)
-            total_lines += lines
-            total_files += 1
-
-    return total_files, total_lines
-
-if __name__ == "__main__":
-    # Get extensions from command-line arguments
-    extensions = [ext.lower() for ext in sys.argv[1:]] if len(sys.argv) > 1 else None
-
-    # Get the current working directory
-    current_dir = os.getcwd()
-    print(f"Scanning directory: {current_dir}")
-    if extensions:
-        print(f"Filtering by extensions: {', '.join(extensions)}")
-    total_files, total_lines = count_lines_in_directory(current_dir, extensions)
-    print(f"Total matching files: {total_files}")
-    print(f"Total lines across matching files: {total_lines}")
-
diff --git a/lsd b/lsd
deleted file mode 100755
index 963f539..0000000
--- a/lsd
+++ /dev/null
@@ -1,19 +0,0 @@
-#!/bin/bash
-
-# Default options for lsd
-default_options="--color=always -F --long --size=short --permission=octal --group-dirs=first -X"
-
-# Check if the first argument is a directory or an option
-if [[ $# -gt 0 && ! $1 =~ ^- ]]; then
-  # First argument is a directory, store it and remove from arguments list
-  directory=$1
-  shift
-else
-  # No directory specified, default to the current directory
-  directory="."
-fi
-
-# Execute lsd with the default options, directory, and any additional arguments provided
-/opt/homebrew/bin/lsd $default_options "$directory" "$@"
-
-
diff --git a/mamba_exporter b/mamba_exporter
deleted file mode 100755
index 176e713..0000000
--- a/mamba_exporter
+++ /dev/null
@@ -1,15 +0,0 @@
-#!/bin/bash
-
-# List all conda environments and cut the output to get just the names
-envs=$(mamba env list | awk '{print $1}' | grep -v '^#' | grep -v 'base')
-
-# Loop through each environment name
-for env in $envs; do
-    # Use conda (or mamba, but conda is preferred for compatibility reasons) to export the environment to a YAML file
-    # No need to activate the environment; conda can export directly by specifying the name
-    echo "Exporting $env..."
-    mamba env export --name $env > "${env}.yml"
-done
-
-echo "All environments have been exported."
-
diff --git a/mamba_importer b/mamba_importer
deleted file mode 100755
index 79886e1..0000000
--- a/mamba_importer
+++ /dev/null
@@ -1,26 +0,0 @@
-#!/bin/bash
-
-# Function to process a single .yml file
-process_file() {
-    file="$1"
-    if [[ -f "$file" ]]; then
-        env_name=$(basename "$file" .yml)
-        echo "Creating environment '$env_name' from $file..."
-        conda env create -f "$file" || echo "Failed to create environment from $file"
-    else
-        echo "File $file does not exist."
-    fi
-}
-
-# Check if a .yml file was provided as an argument
-if [[ $# -eq 1 && $1 == *.yml ]]; then
-    # Process the provided .yml file
-    process_file "$1"
-else
-    # No argument provided, process all .yml files in the current directory
-    for file in *.yml; do
-        process_file "$file"
-    done
-    echo "Environment creation process completed."
-fi
-
diff --git a/murder b/murder
deleted file mode 100755
index 3c76f1b..0000000
--- a/murder
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/bin/bash
-
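-# Usage: murder <process-name-or-port>
-# A numeric argument is treated as a port (kills whatever listens on it);
-# anything else is treated as a process name.
-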
-# Check if an argument is given
-if [ $# -eq 0 ]; then
-    echo "Usage: murder [process name or port]"
-    exit 1
-fi
-
-# Get the input parameter
-ARGUMENT=$1
-
-# Check if the argument is numeric
-if [[ $ARGUMENT =~ ^[0-9]+$ ]]; then
-    echo "Killing processes listening on port $ARGUMENT"
-    lsof -t -i:"$ARGUMENT" | xargs kill
-else
-    # Process name was given instead of a port number
-    echo "Killing processes with name $ARGUMENT"
-    for PID in $(ps aux | grep "$ARGUMENT" | grep -v grep | awk '{print $2}'); do
-        echo "Killing process $PID"
-        sudo kill -9 $PID
-    done
-fi
-
diff --git a/n3k b/n3k
deleted file mode 100755
index bb1785b..0000000
--- a/n3k
+++ /dev/null
@@ -1,101 +0,0 @@
-#!/usr/bin/env python3
-
-import sys
-import asyncio
-import trafilatura
-from newspaper import Article
-from urllib.parse import urlparse
-from datetime import datetime
-import textwrap
-
-async def fetch_and_parse_article(url: str):
-    # Try trafilatura first
-    source = trafilatura.fetch_url(url)
-    
-    if source:
-        try:
-            traf = trafilatura.extract_metadata(filecontent=source, default_url=url)
-            
-            article = Article(url)
-            article.set_html(source)
-            article.parse()
-            
-            # Update article properties with trafilatura data
-            article.title = article.title or traf.title or url
-            article.authors = article.authors or (traf.author if isinstance(traf.author, list) else [traf.author])
-            article.publish_date = traf.date or datetime.now()
-            article.text = trafilatura.extract(source, output_format="markdown", include_comments=False) or article.text
-            article.top_image = article.top_image or traf.image
-            article.source_url = traf.sitename or urlparse(url).netloc.replace('www.', '').title()
-            
-            return article
-        except Exception:
-            pass
-    
-    # Fallback to newspaper3k
-    try:
-        headers = {
-            'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36',
-            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
-        }
-        
-        article = Article(url)
-        article.config.browser_user_agent = headers['User-Agent']
-        article.config.headers = headers
-        article.download()
-        article.parse()
-        
-        article.source_url = urlparse(url).netloc.replace('www.', '').title()
-        return article
-    
-    except Exception as e:
-        raise Exception(f"Failed to parse article from {url}: {str(e)}")
-
-def format_article_markdown(article) -> str:
-    # Format title
-    output = f"# {article.title}\n\n"
-    
-    # Format metadata
-    if article.authors:
-        authors = article.authors if isinstance(article.authors, list) else [article.authors]
-        output += f"*By {', '.join(filter(None, authors))}*\n\n"
-    
-    if article.publish_date:
-        date_str = article.publish_date.strftime("%Y-%m-%d") if isinstance(article.publish_date, datetime) else str(article.publish_date)
-        output += f"*Published: {date_str}*\n\n"
-    
-    if article.top_image:
-        output += f"![Article Image]({article.top_image})\n\n"
-    
-    # Format article text with proper wrapping
-    if article.text:
-        paragraphs = article.text.split('\n')
-        wrapped_paragraphs = []
-        
-        for paragraph in paragraphs:
-            if paragraph.strip():
-                wrapped = textwrap.fill(paragraph.strip(), width=80)
-                wrapped_paragraphs.append(wrapped)
-        
-        output += '\n\n'.join(wrapped_paragraphs)
-    
-    return output
-
-async def main():
-    if len(sys.argv) != 2:
-        print("Usage: ./n3k <article_url>")
-        sys.exit(1)
-    
-    url = sys.argv[1]
-    try:
-        article = await fetch_and_parse_article(url)
-        formatted_content = format_article_markdown(article)
-        print(formatted_content)
-    except Exception as e:
-        print(f"Error processing article: {str(e)}")
-        sys.exit(1)
-
-if __name__ == "__main__":
-    asyncio.run(main())
diff --git a/nocomment b/nocomment
deleted file mode 100755
index 2cb655e..0000000
--- a/nocomment
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/usr/bin/env python3
-
-import sys
-import os
-
-def print_significant_lines(file_path):
-    try:
-        with open(file_path, 'r') as file:
-            for line in file:
-                # Strip whitespace from the beginning and end of the line
-                stripped_line = line.strip()
-                
-                # Check if the line is not empty, not whitespace, and not a comment
-                if stripped_line and not stripped_line.startswith('#'):
-                    print(line.rstrip())  # Print the line without trailing newline
-    except FileNotFoundError:
-        print(f"Error: File '{file_path}' not found.", file=sys.stderr)
-    except IOError:
-        print(f"Error: Unable to read file '{file_path}'.", file=sys.stderr)
-
-if __name__ == "__main__":
-    if len(sys.argv) != 2:
-        print("Usage: nocomment <file_path>", file=sys.stderr)
-        sys.exit(1)
-
-    file_path = sys.argv[1]
-    print_significant_lines(file_path)
diff --git a/noon b/noon
deleted file mode 100755
index 1c2bb0c..0000000
--- a/noon
+++ /dev/null
@@ -1,116 +0,0 @@
-#!/Users/sij/miniforge3/bin/python
-
-import os
-import sys
-import argparse
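-
-# Swap "(orig)"/"(new)" filename variants: with -o, each "foo(orig).ext" takes
-# over "foo.ext" (the old "foo.ext" becomes "foo(new).ext"); -n does the reverse.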
-
-def ask_for_confirmation(message):
-    while True:
-        user_input = input(message + " (y/n): ").strip().lower()
-        if user_input in ('y', 'n'):
-            return user_input == 'y'
-        else:
-            print("Invalid input. Please enter 'y' or 'n'.")
-
-def rename_files_orig(root_dir, manual):
-    for dirpath, _, filenames in os.walk(root_dir, followlinks=False):
-        for filename in filenames:
-            if '(orig)' in filename:
-                orig_filepath = os.path.join(dirpath, filename)
-                base_filename, ext = os.path.splitext(filename)
-                new_filename = base_filename.replace('(orig)', '')
-                new_filepath = os.path.join(dirpath, new_filename + ext)
-
-                if os.path.exists(new_filepath):
-                    new_file_new_name = new_filename + '(new)' + ext
-                    new_file_new_path = os.path.join(dirpath, new_file_new_name)
-
-                    if manual:
-                        if not ask_for_confirmation(f"Do you want to rename {new_filepath} to {new_file_new_path}?"):
-                            continue
-
-                    if os.path.exists(new_file_new_path):
-                        print(f"Error: Cannot rename {new_filepath} to {new_file_new_path} because the target file already exists.")
-                        continue
-
-                    os.rename(new_filepath, new_file_new_path)
-                    print(f'Renamed: {new_filepath} -> {new_file_new_name}')
-                else:
-                    print(f"No associated file found for: {orig_filepath}")
-
-                orig_file_new_name = new_filename + ext
-                orig_file_new_path = os.path.join(dirpath, orig_file_new_name)
-
-                if manual:
-                    if not ask_for_confirmation(f"Do you want to rename {orig_filepath} to {orig_file_new_path}?"):
-                        continue
-
-                if os.path.exists(orig_file_new_path):
-                    print(f"Error: Cannot rename {orig_filepath} to {orig_file_new_path} because the target file already exists.")
-                    continue
-
-                os.rename(orig_filepath, orig_file_new_path)
-                print(f'Renamed: {orig_filepath} -> {orig_file_new_name}')
-
-def rename_files_new(root_dir, manual):
-    for dirpath, _, filenames in os.walk(root_dir, followlinks=False):
-        for filename in filenames:
-            if '(new)' in filename:
-                new_filepath = os.path.join(dirpath, filename)
-                base_filename, ext = os.path.splitext(filename)
-                orig_filename = base_filename.replace('(new)', '')
-                orig_filepath = os.path.join(dirpath, orig_filename + ext)
-
-                if os.path.exists(orig_filepath):
-                    orig_file_orig_name = orig_filename + '(orig)' + ext
-                    orig_file_orig_path = os.path.join(dirpath, orig_file_orig_name)
-
-                    if manual:
-                        if not ask_for_confirmation(f"Do you want to rename {orig_filepath} to {orig_file_orig_path}?"):
-                            continue
-
-                    if os.path.exists(orig_file_orig_path):
-                        print(f"Error: Cannot rename {orig_filepath} to {orig_file_orig_path} because the target file already exists.")
-                        continue
-
-                    os.rename(orig_filepath, orig_file_orig_path)
-                    print(f'Renamed: {orig_filepath} -> {orig_file_orig_name}')
-                else:
-                    print(f"No associated file found for: {new_filepath}")
-
-                new_file_new_name = orig_filename + ext
-                new_file_new_path = os.path.join(dirpath, new_file_new_name)
-
-                if manual:
-                    if not ask_for_confirmation(f"Do you want to rename {new_filepath} to {new_file_new_path}?"):
-                        continue
-
-                if os.path.exists(new_file_new_path):
-                    print(f"Error: Cannot rename {new_filepath} to {new_file_new_path} because the target file already exists.")
-                    continue
-
-                os.rename(new_filepath, new_file_new_path)
-                print(f'Renamed: {new_filepath} -> {new_file_new_name}')
-
-if __name__ == "__main__":
-    parser = argparse.ArgumentParser(description='Rename files based on given criteria.')
-    parser.add_argument('-o', '--orig', action='store_true', help='Rename files ending with (orig)')
-    parser.add_argument('-n', '--new', action='store_true', help='Rename files ending with (new)')
-    parser.add_argument('-m', '--manual', action='store_true', help='Manual mode: ask for confirmation before each renaming')
-    parser.add_argument('directory', nargs='?', default=os.getcwd(), help='Directory to start the search (default: current directory)')
-    args = parser.parse_args()
-
-    if args.orig and args.new:
-        print("Error: Please specify either -o or -n, not both.")
-        sys.exit(1)
-
-    if args.orig:
-        print("Running in ORIG mode")
-        rename_files_orig(args.directory, args.manual)
-    elif args.new:
-        print("Running in NEW mode")
-        rename_files_new(args.directory, args.manual)
-    else:
-        print("Error: Please specify either -o or -n.")
-        sys.exit(1)
-
diff --git a/nv b/nv
deleted file mode 100755
index f0775e9..0000000
--- a/nv
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/bin/bash
-
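-# Usage: nv <name> [python-version]
-# e.g. `nv scratch 3.11` (or `py3.11` / `python3.11`) creates a mamba env and a
-# tmux session that share <name>, then attaches to the session.
-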
-SESSION_NAME="$1"
-PYTHON_VERSION_INPUT="${2:-3.10}"  # Default to 3.10 if not specified
-
-# Normalize the Python version input
-if [[ "$PYTHON_VERSION_INPUT" =~ ^python[0-9.]+$ ]]; then
-    PYTHON_VERSION="${PYTHON_VERSION_INPUT//python/}"
-elif [[ "$PYTHON_VERSION_INPUT" =~ ^py[0-9.]+$ ]]; then
-    PYTHON_VERSION="${PYTHON_VERSION_INPUT//py/}"
-else
-    PYTHON_VERSION="$PYTHON_VERSION_INPUT"
-fi
-
-# Format for Conda
-MAMBA_PYTHON_VERSION="python=$PYTHON_VERSION"
-
-# Check if Conda environment exists
-if ! mamba env list | grep -q "^$SESSION_NAME\s"; then
-    echo "Creating new Mamba environment: $SESSION_NAME with $MAMBA_PYTHON_VERSION"
-    mamba create --name "$SESSION_NAME" "$MAMBA_PYTHON_VERSION" --yes
-fi
-
-# Find Conda env directory
-CONDA_ENV_DIR=$(mamba env list | grep "^$SESSION_NAME" | awk '{print $2}')
-
-# Handle tmux session
-if ! tmux has-session -t "$SESSION_NAME" 2>/dev/null; then
-    echo "Creating new tmux session: $SESSION_NAME"
-    tmux new-session -d -s "$SESSION_NAME"
-    sleep 2
-fi
-
-# Attach to tmux session and update PATH before activating Conda environment
-sleep 1
-tmux send-keys -t "$SESSION_NAME" "export PATH=\"$MAMBA_ENV_DIR/bin:\$PATH\"" C-m
-tmux send-keys -t "$SESSION_NAME" "source ~/.zshrc" C-m
-tmux send-keys -t "$SESSION_NAME" "mamba activate $SESSION_NAME" C-m
-tmux attach -t "$SESSION_NAME"
-
diff --git a/ocr b/ocr
deleted file mode 100755
index 937c042..0000000
--- a/ocr
+++ /dev/null
@@ -1,104 +0,0 @@
-#!/usr/bin/env python3
-
-import sys
-import os
-from pathlib import Path
-from pdf2image import convert_from_path
-import numpy as np
-import easyocr
-from PyPDF2 import PdfReader, PdfWriter
-import concurrent.futures
-import argparse
-from tqdm import tqdm
-import logging
-
-def setup_logging():
-    logging.basicConfig(
-        level=logging.INFO,
-        format='%(asctime)s - %(levelname)s - %(message)s',
-        handlers=[
-            logging.StreamHandler(),
-            logging.FileHandler('ocr_process.log')
-        ]
-    )
-
-def extract_images_from_pdf_chunk(pdf_path, start_page, num_pages):
-    try:
-        return convert_from_path(pdf_path,
-                                 first_page=start_page,
-                                 last_page=start_page + num_pages - 1,
-                                 dpi=300)
-    except Exception as e:
-        logging.error(f"Error extracting pages {start_page}-{start_page+num_pages}: {e}")
-        raise
-
-def process_page(image):
-    # easyocr expects a file path, raw bytes, or a numpy array (not a PIL
-    # image), so convert the pdf2image output before running OCR.
-    reader = easyocr.Reader(['en'], gpu=True)
-    return reader.readtext(np.array(image))
-
-def process_chunk(pdf_path, start_page, num_pages):
-    images = extract_images_from_pdf_chunk(pdf_path, start_page, num_pages)
-    results = []
-    with concurrent.futures.ThreadPoolExecutor() as executor:
-        futures = [executor.submit(process_page, image) for image in images]
-        for future in concurrent.futures.as_completed(futures):
-            try:
-                results.append(future.result())
-            except Exception as e:
-                logging.error(f"Error processing page: {e}")
-    return results
-
-def main():
-    parser = argparse.ArgumentParser(description='OCR a PDF file using EasyOCR')
-    parser.add_argument('pdf_path', type=str, help='Path to the PDF file')
-    parser.add_argument('--chunk-size', type=int, default=100,
-                        help='Number of pages to process in each chunk')
-    args = parser.parse_args()
-
-    pdf_path = Path(args.pdf_path)
-    if not pdf_path.exists():
-        print(f"Error: File {pdf_path} does not exist")
-        sys.exit(1)
-
-    setup_logging()
-    logging.info(f"Starting OCR process for {pdf_path}")
-
-    # Create output directory
-    output_dir = pdf_path.parent / f"{pdf_path.stem}_ocr_results"
-    output_dir.mkdir(exist_ok=True)
-
-    reader = PdfReader(str(pdf_path))
-    total_pages = len(reader.pages)
-    
-    with tqdm(total=total_pages) as pbar:
-        for start_page in range(1, total_pages + 1, args.chunk_size):
-            chunk_size = min(args.chunk_size, total_pages - start_page + 1)
-            chunk_output = output_dir / f"chunk_{start_page:06d}.txt"
-            
-            if chunk_output.exists():
-                logging.info(f"Skipping existing chunk {start_page}")
-                pbar.update(chunk_size)
-                continue
-
-            try:
-                results = process_chunk(str(pdf_path), start_page, chunk_size)
-                
-                # Save results
-                with open(chunk_output, 'w', encoding='utf-8') as f:
-                    for page_num, page_results in enumerate(results, start_page):
-                        f.write(f"=== Page {page_num} ===\n")
-                        for text_result in page_results:
-                            f.write(f"{text_result[1]}\n")
-                        f.write("\n")
-                
-                pbar.update(chunk_size)
-                logging.info(f"Completed chunk starting at page {start_page}")
-                
-            except Exception as e:
-                logging.error(f"Failed to process chunk starting at page {start_page}: {e}")
-                continue
-
-    logging.info("OCR process complete")
-
-if __name__ == '__main__':
-    main()
-
diff --git a/ollapull b/ollapull
deleted file mode 100755
index e77d776..0000000
--- a/ollapull
+++ /dev/null
@@ -1,8 +0,0 @@
-#!/usr/bin/env bash
-# Pull the latest version of every locally installed model:
-
-ollama ls | tail -n +2 | awk '{print $1}' | while read MODEL; do
-  echo "Pulling latest for $MODEL..."
-  ollama pull "$MODEL"
-done
-
diff --git a/pf b/pf
deleted file mode 100755
index ebc8ac0..0000000
--- a/pf
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/usr/bin/python3
-import socket
-import threading
-import select
-
-def forward(source, destination):
-    try:
-        while True:
-            ready, _, _ = select.select([source], [], [], 1)
-            if ready:
-                data = source.recv(4096)
-                if not data:
-                    break
-                destination.sendall(data)
-    except (OSError, socket.error) as e:
-        print(f"Connection error: {e}")
-    finally:
-        try:
-            source.shutdown(socket.SHUT_RD)
-        except OSError:
-            pass
-        try:
-            destination.shutdown(socket.SHUT_WR)
-        except OSError:
-            pass
-
-def handle(client_socket, remote_host, remote_port):
-    remote_socket = None
-    try:
-        remote_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-        remote_socket.connect((remote_host, remote_port))
-        
-        thread1 = threading.Thread(target=forward, args=(client_socket, remote_socket))
-        thread2 = threading.Thread(target=forward, args=(remote_socket, client_socket))
-        
-        thread1.start()
-        thread2.start()
-
-        thread1.join()
-        thread2.join()
-    except Exception as e:
-        print(f"Error in handle: {e}")
-    finally:
-        client_socket.close()
-        if remote_socket:
-            remote_socket.close()
-
-def create_forwarder(local_host, local_port, remote_host, remote_port):
-    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-    server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
-    server_socket.bind((local_host, local_port))
-    server_socket.listen(5)
-    
-    print(f"Forwarding {local_host}:{local_port} to {remote_host}:{remote_port}")
-
-    while True:
-        try:
-            client_socket, address = server_socket.accept()
-            print(f"Received connection from {address}")
-            threading.Thread(target=handle, args=(client_socket, remote_host, remote_port)).start()
-        except Exception as e:
-            print(f"Error accepting connection: {e}")
-
-def main():
-    listen_ip = '0.0.0.0'
-    
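-    # Expose loopback-only bridge ports on all interfaces:
-    # 1143 -> 127.0.0.1:1142 (IMAP) and 1025 -> 127.0.0.1:1024 (SMTP).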
-    imap_thread = threading.Thread(target=create_forwarder, args=(listen_ip, 1143, '127.0.0.1', 1142))
-    imap_thread.start()
-
-    smtp_thread = threading.Thread(target=create_forwarder, args=(listen_ip, 1025, '127.0.0.1', 1024))
-    smtp_thread.start()
-
-    imap_thread.join()
-    smtp_thread.join()
-
-if __name__ == "__main__":
-    main()
\ No newline at end of file
diff --git a/pippin b/pippin
deleted file mode 100755
index 9075300..0000000
--- a/pippin
+++ /dev/null
@@ -1,34 +0,0 @@
-#!/bin/bash
-
-# Check if an argument is provided
-if [ $# -eq 0 ]; then
-    echo "Usage: $0 <conda_environment_name>"
-    exit 1
-fi
-
-# Get the conda environment name from the command line argument
-env_name="$1"
-
-# Check if the conda environment already exists
-if ! conda info --envs | grep -q "^$env_name "; then
-    echo "Creating new conda environment: $env_name"
-    conda create -n "$env_name" python=3.9 -y
-else
-    echo "Conda environment '$env_name' already exists"
-fi
-
-# Activate the conda environment
-eval "$(conda shell.bash hook)"
-conda activate "$env_name"
-
-# Get the path to the conda environment's python binary
-conda_python=$(which python)
-
-# Recursively search for requirements.txt files and install dependencies
-find . -name "requirements.txt" | while read -r req_file; do
-    echo "Installing requirements from: $req_file"
-    "$conda_python" -m pip install -r "$req_file"
-done
-
-echo "All requirements.txt files processed."
-
diff --git a/pull b/pull
deleted file mode 100755
index f50896d..0000000
--- a/pull
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/bin/bash
-
-# Path to the file containing the list of repositories
-REPOS_FILE="$HOME/.repos.txt"
-
-# Check if the repos file exists
-if [ ! -f "$REPOS_FILE" ]; then
-    echo "Error: $REPOS_FILE does not exist in the current directory."
-    exit 1
-fi
-
-# Read the repos file and process each directory
-while IFS= read -r repo_path || [[ -n "$repo_path" ]]; do
-    # Trim whitespace
-    repo_path=$(echo "$repo_path" | xargs)
-
-    # Skip empty lines and lines starting with #
-    [[ -z "$repo_path" || "$repo_path" == \#* ]] && continue
-
-    # Expand tilde to home directory
-    repo_path="${repo_path/#\~/$HOME}"
-
-    echo "Processing repository: $repo_path"
-
-    # Navigate to the project directory
-    if ! cd "$repo_path"; then
-        echo "Error: Unable to change to directory $repo_path. Skipping."
-        continue
-    fi
-
-    # Check if it's a git repository
-    if [ ! -d .git ]; then
-        echo "Warning: $repo_path is not a git repository. Skipping."
-        cd - > /dev/null
-        continue
-    fi
-
-    # Force pull the latest changes from the repository
-    echo "Force pulling latest changes..."
-    git pull --force
-
-    # Return to the original directory
-    cd - > /dev/null
-
-    echo "Update complete for $repo_path"
-    echo "----------------------------------------"
-done < "$REPOS_FILE"
-
-echo "All repositories processed."
diff --git a/push b/push
deleted file mode 100755
index 237ec00..0000000
--- a/push
+++ /dev/null
@@ -1,87 +0,0 @@
-#!/bin/bash
-
-# Path to the file containing the list of repositories
-REPOS_FILE="$HOME/.repos.txt"
-
-# Check if the repos file exists
-if [ ! -f "$REPOS_FILE" ]; then
-    echo "Error: $REPOS_FILE does not exist."
-    exit 1
-fi
-
-# Read the repos file and process each directory
-while IFS= read -r repo_path || [[ -n "$repo_path" ]]; do
-    # Trim whitespace
-    repo_path=$(echo "$repo_path" | xargs)
-
-    # Skip empty lines and lines starting with #
-    [[ -z "$repo_path" || "$repo_path" == \#* ]] && continue
-
-    # Expand tilde to home directory
-    repo_path="${repo_path/#\~/$HOME}"
-
-    # Check if the directory exists
-    if [ ! -d "$repo_path" ]; then
-        echo "Warning: Directory $repo_path does not exist. Skipping."
-        continue
-    fi
-
-    echo "Processing repository: $repo_path"
-
-    # Navigate to the project directory
-    cd "$repo_path" || { echo "Error: Unable to change to directory $repo_path"; continue; }
-
-    # Check if it's a git repository
-    if [ ! -d .git ]; then
-        echo "Warning: $repo_path is not a git repository. Skipping."
-        cd - > /dev/null
-        continue
-    fi
-
-    # Check if 'origin' remote exists
-    if ! git remote | grep -q '^origin$'; then
-        echo "Remote 'origin' not found. Attempting to set it up..."
-        # Try to guess the remote URL based on the directory name
-        repo_name=$(basename "$repo_path")
-        remote_url="https://git.sij.ai/sij/$repo_name.git"
-        git remote add origin "$remote_url"
-        echo "Added remote 'origin' with URL: $remote_url"
-    fi
-
-    # Get the current branch
-    current_branch=$(git rev-parse --abbrev-ref HEAD)
-
-    # Pull the latest changes from the repository
-    echo "Pulling from $current_branch branch..."
-    if ! git pull origin "$current_branch"; then
-        echo "Failed to pull from origin. The remote branch might not exist or there might be conflicts."
-        echo "Skipping further operations for this repository."
-        cd - > /dev/null
-        continue
-    fi
-
-    # Add changes to the Git index (staging area)
-    echo "Adding all changes..."
-    git add .
-
-    # Check if there are changes to commit
-    if git diff-index --quiet HEAD --; then
-        echo "No changes to commit."
-    else
-        # Commit changes
-        echo "Committing changes..."
-        git commit -m "Auto-update: $(date)"
-
-        # Push changes to the remote repository
-        echo "Pushing all changes..."
-        if ! git push origin "$current_branch"; then
-            echo "Failed to push changes. The remote branch might not exist."
-            echo "Creating remote branch and pushing..."
-            git push -u origin "$current_branch"
-        fi
-    fi
-
-    # Return to the original directory
-    cd - > /dev/null
-
-    echo "Update complete for $repo_path!"
-    echo "----------------------------------------"
-done < "$REPOS_FILE"
-
-echo "All repositories processed."
-
diff --git a/serv b/serv
deleted file mode 100755
index d17d54c..0000000
--- a/serv
+++ /dev/null
@@ -1,46 +0,0 @@
-#!/bin/bash
-
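-# Usage: serv <executable>
-# e.g. `serv ollama` writes ~/Library/LaunchAgents/ollama.plist pointing at the
-# resolved binary, then loads it so it runs at login and is kept alive.
-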
-# Check if an executable is provided as an argument
-if [ -z "$1" ]; then
-  echo "Usage: $0 <executable>"
-  exit 1
-fi
-
-# Find the executable path using 'which'
-EXEC_PATH=$(which "$1")
-
-# Check if the executable exists
-if [ -z "$EXEC_PATH" ]; then
-  echo "Error: Executable '$1' not found."
-  exit 1
-fi
-
-# Get the executable name
-EXEC_NAME=$(basename "$EXEC_PATH")
-
-# Create the launchd plist file content
-PLIST_FILE_CONTENT="<?xml version=\"1.0\" encoding=\"UTF-8\"?>
-<!DOCTYPE plist PUBLIC \"-//Apple//DTD PLIST 1.0//EN\" \"http://www.apple.com/DTDs/PropertyList-1.0.dtd\">
-<plist version=\"1.0\">
-<dict>
-  <key>Label</key>
-  <string>$EXEC_NAME</string>
-  <key>ProgramArguments</key>
-  <array>
-    <string>$EXEC_PATH</string>
-  </array>
-  <key>KeepAlive</key>
-  <true/>
-  <key>RunAtLoad</key>
-  <true/>
-</dict>
-</plist>"
-
-# Create the launchd plist file
-PLIST_FILE="$HOME/Library/LaunchAgents/$EXEC_NAME.plist"
-echo "$PLIST_FILE_CONTENT" > "$PLIST_FILE"
-
-# Load the launchd service
-launchctl load "$PLIST_FILE"
-
-echo "Service '$EXEC_NAME' has been created and loaded."
diff --git a/sij b/sij
deleted file mode 100755
index 88551bd..0000000
--- a/sij
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/bash
-
-# Set the path to the script
-SCRIPT_PATH="$HOME/workshop/sijapi/sijapi/helpers/start.py"
-
-# Check if the script exists
-if [ ! -f "$SCRIPT_PATH" ]; then
-    echo "Error: Script not found at $SCRIPT_PATH"
-    exit 1
-fi
-
-# Set up the environment
-source "$HOME/workshop/sijapi/sijapi/config/.env"
-
-# Activate the conda environment (adjust the path if necessary)
-source "$HOME/miniforge3/bin/activate" sijapi
-
-# Run the Python script with all command line arguments
-python "$SCRIPT_PATH" "$@"
-
-# Deactivate the conda environment
-conda deactivate
-
diff --git a/tablemd b/tablemd
deleted file mode 100755
index c5c8fd4..0000000
--- a/tablemd
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/usr/bin/env python3
-
-import sys
-import re
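-
-# Usage: some-command | tablemd
-# Converts columnar text whose fields are separated by two or more spaces into
-# a Markdown table (the first input row becomes the header).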
-
-def to_markdown_table(text):
-    lines = text.strip().split('\n')
-    
-    # Using regex to split while preserving multi-word columns
-    pattern = r'\s{2,}'  # Two or more spaces
-    rows = [re.split(pattern, line.strip()) for line in lines]
-    
-    # Create the markdown header row
-    header = ' | '.join(rows[0])
-    # Create separator row with correct number of columns
-    separator = ' | '.join(['---'] * len(rows[0]))
-    # Create data rows
-    data_rows = [' | '.join(row) for row in rows[1:]]
-    
-    # Combine all parts
-    return f"| {header} |\n| {separator} |\n" + \
-           '\n'.join(f"| {row} |" for row in data_rows)
-
-print(to_markdown_table(sys.stdin.read()))
-
diff --git a/tmux_merge b/tmux_merge
deleted file mode 100755
index 50ec168..0000000
--- a/tmux_merge
+++ /dev/null
@@ -1,41 +0,0 @@
-#!/bin/bash
-
-# Get the first session as the target for all panes
-target_session=$(tmux list-sessions -F '#{session_name}' | head -n 1)
-target_window="${target_session}:0" # assuming the first window is index 0
-target_pane="${target_window}.0" # assuming the first pane is index 0
-
-# Loop through each session
-tmux list-sessions -F '#{session_name}' | while read session; do
-    # Skip the target session
-    if [[ "$session" == "$target_session" ]]; then
-        continue
-    fi
-
-    # Loop through each window in the session
-    tmux list-windows -t "$session" -F '#{window_index}' | while read window; do
-        # Loop through each pane in the window
-        tmux list-panes -t "${session}:${window}" -F '#{pane_index}' | while read pane; do
-            source="${session}:${window}.${pane}"
-            # Check if the source is not the same as the target
-            if [[ "$source" != "$target_pane" ]]; then
-                # Join the pane to the target pane
-                tmux join-pane -s "$source" -t "$target_pane"
-            fi
-        done
-    done
-
-    # After moving all panes out, the session is usually empty and tmux
-    # kills it automatically, so ignore errors if it is already gone.
-    # (The target session was skipped above, so it is never killed here.)
-    tmux kill-session -t "$session" 2>/dev/null
-done
-
-# After moving all panes, you may want to manually adjust the layout.
-# For a simple automatic layout adjustment, you can use:
-tmux select-layout -t "$target_window" tiled
-
-# Attach to the master session after everything is merged
-tmux attach-session -t "$target_session"
-
diff --git a/txt_line_merge_abc b/txt_line_merge_abc
deleted file mode 100755
index d6487ea..0000000
--- a/txt_line_merge_abc
+++ /dev/null
@@ -1,37 +0,0 @@
-#!/usr/bin/env python3
-
-import sys
-
-def merge_files(file_paths):
-    if not file_paths:
-        print("At least one file path is required.")
-        return
-
-    # Read all lines from all files, including the first one
-    all_lines = set()
-    for file_path in file_paths:
-        with open(file_path, 'r') as f:
-            all_lines.update(f.read().splitlines())
-
-    # Sort the unique lines case-insensitively
-    sorted_lines = sorted(all_lines, key=str.lower)
-
-    # Write the sorted, unique lines to the first file, overwriting its contents
-    with open(file_paths[0], 'w') as f:
-        for line in sorted_lines:
-            f.write(line + '\n')
-
-    print(f"Merged {len(file_paths)} files into {file_paths[0]}")
-
-if __name__ == "__main__":
-    # Get file paths from command line arguments
-    file_paths = sys.argv[1:]
-
-    if not file_paths:
-        print("Usage: txt-line-merge-abc file1.txt file2.txt file3.txt ...")
-    else:
-        merge_files(file_paths)
-
-
diff --git a/txtsort b/txtsort
deleted file mode 100755
index a2235aa..0000000
--- a/txtsort
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/bin/bash
-
-# Checking if the user provided a file name
-if [ $# -ne 1 ]; then
-  echo "Usage: $0 filename"
-  exit 1
-fi
-
-# Checking if the given file is readable
-if ! [ -r "$1" ]; then
-  echo "The file '$1' is not readable or does not exist."
-  exit 1
-fi
-
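-# Sorted output goes to stdout; redirect to keep it, e.g.: txtsort names.txt > sorted.txt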
-sort "$1"
-
diff --git a/uninstall b/uninstall
deleted file mode 100755
index 949f473..0000000
--- a/uninstall
+++ /dev/null
@@ -1,118 +0,0 @@
-#!/bin/bash
-
-# Required parameters:
-# @raycast.schemaVersion 1
-# @raycast.title Uninstall App
-# @raycast.mode fullOutput
-
-# Optional parameters:
-# @raycast.icon πŸ—‘οΈ
-# @raycast.argument1 { "type": "text", "placeholder": "App name" }
-
-# Documentation:
-# @raycast.description Move an application and its related files to the Trash (no interactive prompts)
-
-########################################
-# Moves a file to the Trash via AppleScript
-########################################
-move_to_trash() {
-    local file_path="$1"
-    osascript -e "tell application \"Finder\" to delete POSIX file \"$file_path\"" >/dev/null 2>&1
-}
-
-########################################
-# Uninstall the specified app name
-########################################
-uninstall_app() {
-    local input="$1"
-
-    # Ensure we have a .app extension
-    if [[ ! "$input" =~ \.app$ ]]; then
-        input="${input}.app"
-    fi
-
-    ########################################
-    # 1) Spotlight exact-match search
-    ########################################
-    local app_paths
-    app_paths=$(mdfind "kMDItemKind == 'Application' && kMDItemDisplayName == '$input'")
-
-    # 2) If nothing found, attempt partial-match on the base name (e.g. "Element")
-    if [ -z "$app_paths" ]; then
-        app_paths=$(mdfind "kMDItemKind == 'Application' && kMDItemDisplayName == '*${input%.*}*'")
-    fi
-
-    # 3) If still empty, bail out
-    if [ -z "$app_paths" ]; then
-        echo "Application not found. Please check the name and try again."
-        return 1
-    fi
-
-    ########################################
-    # Filter results to prefer /Applications
-    ########################################
-    # Turn multi-line results into an array
-    IFS=$'\n' read -rd '' -a all_matches <<< "$app_paths"
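-    # (read -d '' consumes the entire here-string; -a splits it on the
-    # newline-only IFS into one array element per matched path)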
-
-    # We'll pick the match in /Applications if it exists.
-    local chosen=""
-    for path in "${all_matches[@]}"; do
-        if [[ "$path" == "/Applications/"* ]]; then
-            chosen="$path"
-            break
-        fi
-    done
-
-    # If no match was in /Applications, just pick the first one
-    if [ -z "$chosen" ]; then
-        chosen="${all_matches[0]}"
-    fi
-
-    # Show which one we're uninstalling
-    echo "Uninstalling: $chosen"
-
-    ########################################
-    # Move the .app bundle to Trash
-    ########################################
-    move_to_trash "$chosen" 
-    echo "Moved $chosen to Trash."
-
-    ########################################
-    # Find bundle identifier for deeper cleanup
-    ########################################
-    local app_identifier
-    app_identifier=$(mdls -name kMDItemCFBundleIdentifier -r "$chosen")
-    # mdls prints the literal string "(null)" when the attribute is missing
-    [ "$app_identifier" = "(null)" ] && app_identifier=""
-
-    echo "Removing related files..."
-
-    if [ -n "$app_identifier" ]; then
-        # Remove anything matching the bundle identifier
-        find /Library/Application\ Support \
-             /Library/Caches \
-             /Library/Preferences \
-             ~/Library/Application\ Support \
-             ~/Library/Caches \
-             ~/Library/Preferences \
-             -maxdepth 1 -name "*$app_identifier*" -print0 2>/dev/null \
-        | while IFS= read -r -d '' file; do
-            move_to_trash "$file"
-        done
-    else
-        # Fall back to removing by the app's base name
-        local base_name="${input%.app}"
-        find /Library/Application\ Support \
-             /Library/Caches \
-             /Library/Preferences \
-             ~/Library/Application\ Support \
-             ~/Library/Caches \
-             ~/Library/Preferences \
-             -maxdepth 1 -name "*$base_name*" -print0 2>/dev/null \
-        | while IFS= read -r -d '' file; do
-            move_to_trash "$file"
-        done
-    fi
-
-    echo "Uninstallation complete."
-}
-
-uninstall_app "$1"
diff --git a/vitals b/vitals
deleted file mode 100755
index 8ea3442..0000000
--- a/vitals
+++ /dev/null
@@ -1,130 +0,0 @@
-#!/bin/bash
-
-# Create a DNS rewrite rule in AdGuard home that assigns 'check.adguard.test'
-# to an IP address beginning '100.', such as the Tailscale IP of your server.
-# Alternatively, you can change the adguard_test_domain to whatever you like,
-# so long as it matches the domain of a DNS rewrite rule you created in AGH.
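-# Example rewrite (hypothetical): check.adguard.test -> 100.64.0.7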
-
-adguard_test_domain='check.adguard.test'
-
-if [[ "$(uname)" == "Darwin" ]]; then
-    # macOS
-    local_ip=$(ifconfig | grep "inet " | grep -v 127.0.0.1 | awk '{print $2}' | head -n1)
-    uptime_seconds=$(sysctl -n kern.boottime | awk '{print $4}' | sed 's/,//')
-    current_time=$(date +%s)
-    uptime_seconds=$((current_time - uptime_seconds))
-    days=$((uptime_seconds / 86400))
-    hours=$(( (uptime_seconds % 86400) / 3600 ))
-    minutes=$(( (uptime_seconds % 3600) / 60 ))
-    uptime="up "
-    [[ $days -gt 0 ]] && uptime+="$days days, "
-    [[ $hours -gt 0 ]] && uptime+="$hours hours, "
-    uptime+="$minutes minutes"
-else
-    # Linux
-    local_ip=$(hostname -I | awk '{print $1}')
-    uptime=$(uptime -p)
-fi
-
-wan_info=$(curl -s --max-time 10 https://am.i.mullvad.net/json)
-wan_connected=false
-if [ ! -z "$wan_info" ]; then
-  wan_connected=true
-  wan_ip=$(echo "$wan_info" | jq -r '.ip')
-  mullvad_exit_ip=$(echo "$wan_info" | jq '.mullvad_exit_ip')
-  blacklisted=$(echo "$wan_info" | jq '.blacklisted.blacklisted')
-else
-  wan_ip="Unavailable"
-  mullvad_exit_ip=false
-  blacklisted=false
-fi
-
-# Check if Tailscale is installed and get IP
-if command -v tailscale &> /dev/null; then
-  has_tailscale=true
-  tailscale_ip=$(tailscale ip -4)
-  # Get Tailscale exit-node information
-  ts_exitnode_output=$(tailscale exit-node list)
-  # Parse exit node hostname
-  if echo "$ts_exitnode_output" | grep -q 'selected'; then
-    mullvad_exitnode=true
-    # Extract the hostname of the selected exit node, taking only the part before any newline
-    mullvad_hostname=$(echo "$ts_exitnode_output" | grep 'selected' | awk '{print $2}' | awk -F'\n' '{print $1}')
-  else
-    mullvad_exitnode=false
-    mullvad_hostname=""
-  fi
-else
-  has_tailscale=false
-  tailscale_ip="Not installed"
-  mullvad_exitnode=false
-  mullvad_hostname=""
-fi
-
-nextdns_info=$(curl -sL --max-time 10 https://test.nextdns.io)
-if [ -z "$nextdns_info" ]; then
-  echo "Failed to fetch NextDNS status or no internet connection." >&2
-  nextdns_connected=false
-  nextdns_protocol=""
-  nextdns_client=""
-else
-  nextdns_status=$(echo "$nextdns_info" | jq -r '.status')
-  if [ "$nextdns_status" = "ok" ]; then
-    nextdns_connected=true
-    nextdns_protocol=$(echo "$nextdns_info" | jq -r '.protocol')
-    nextdns_client=$(echo "$nextdns_info" | jq -r '.clientName')
-  else
-    nextdns_connected=false
-    nextdns_protocol=""
-    nextdns_client=""
-  fi
-fi
-
-# Check AdGuard Home DNS
-resolved_ip=$(dig +short "$adguard_test_domain" | head -n1)
-if [[ $resolved_ip =~ ^100\. ]]; then
-  adguard_connected=true
-  adguard_protocol="AdGuard Home"
-  adguard_client="$resolved_ip"
-else
-  adguard_connected=false
-  adguard_protocol=""
-  adguard_client=""
-fi
-
-# Output JSON using jq for proper formatting and escaping
-jq -n \
---arg local_ip "$local_ip" \
---argjson wan_connected "$wan_connected" \
---arg wan_ip "$wan_ip" \
---argjson has_tailscale "$has_tailscale" \
---arg tailscale_ip "$tailscale_ip" \
---argjson mullvad_exitnode "$mullvad_exitnode" \
---arg mullvad_hostname "$mullvad_hostname" \
---argjson mullvad_exit_ip "$mullvad_exit_ip" \
---argjson blacklisted "$blacklisted" \
---argjson nextdns_connected "$nextdns_connected" \
---arg nextdns_protocol "$nextdns_protocol" \
---arg nextdns_client "$nextdns_client" \
---argjson adguard_connected "$adguard_connected" \
---arg adguard_protocol "$adguard_protocol" \
---arg adguard_client "$adguard_client" \
---arg uptime "$uptime" \
-'{
-  local_ip: $local_ip,
-  wan_connected: $wan_connected,
-  wan_ip: $wan_ip,
-  has_tailscale: $has_tailscale,
-  tailscale_ip: $tailscale_ip,
-  mullvad_exitnode: $mullvad_exitnode,
-  mullvad_hostname: $mullvad_hostname,
-  mullvad_exit_ip: $mullvad_exit_ip,
-  blacklisted: $blacklisted,
-  nextdns_connected: $nextdns_connected,
-  nextdns_protocol: $nextdns_protocol,
-  nextdns_client: $nextdns_client,
-  adguard_connected: $adguard_connected,
-  adguard_protocol: $adguard_protocol,
-  adguard_client: $adguard_client,
-  uptime: $uptime
-}'
diff --git a/vpn b/vpn
deleted file mode 100755
index f4fa222..0000000
--- a/vpn
+++ /dev/null
@@ -1,456 +0,0 @@
-#!/usr/bin/env python3
-
-import subprocess
-import requests
-import argparse
-import json
-import random
-import datetime
-import os
-
-LOG_FILE = '/var/log/vpn_rotation.txt'
-
-PRIVACY_FRIENDLY_COUNTRIES = [
-    'Finland',
-    'Germany',
-    'Iceland',
-    'Netherlands',
-    'Norway',
-    'Sweden',
-    'Switzerland'
-]
-
-TAILSCALE_ARGS = [
-    '--exit-node-allow-lan-access',
-    '--accept-dns',
-]
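-# --exit-node-allow-lan-access keeps the local LAN reachable while an exit
-# node is active; --accept-dns accepts DNS settings pushed from the tailnet.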
-
-def get_mullvad_info():
-    """Fetch JSON info from Mullvad's 'am.i.mullvad.net/json' endpoint."""
-    response = requests.get('https://am.i.mullvad.net/json', timeout=10)
-    if response.status_code != 200:
-        raise Exception("Could not fetch Mullvad info.")
-    return response.json()
-
-def get_current_exit_node():
-    """
-    Return the DNSName (e.g. 'de-ber-wg-001.mullvad.ts.net.') of whichever
-    peer is currently acting as the exit node. Otherwise returns None.
-    """
-    result = subprocess.run(['tailscale', 'status', '--json'],
-                            capture_output=True, text=True)
-    if result.returncode != 0:
-        raise Exception("Failed to get Tailscale status")
-
-    status = json.loads(result.stdout)
-
-    # 'Peer' is a dict with keys like "nodekey:fe8efdbab7c2..."
-    peers = status.get('Peer', {})
-    for peer_key, peer_data in peers.items():
-        # If the node is currently the exit node, it should have "ExitNode": true
-        if peer_data.get('ExitNode') is True:
-            # Tailscale might return 'de-ber-wg-001.mullvad.ts.net.' with a trailing dot
-            dns_name = peer_data.get('DNSName', '')
-            dns_name = dns_name.rstrip('.')  # remove trailing dot
-            return dns_name
-    
-    # If we don't find any peer with ExitNode = true, there's no exit node
-    return None
-
-def list_exit_nodes():
-    """
-    Return a dict {node_name: country} of all available Tailscale exit nodes
-    based on 'tailscale exit-node list'.
-    The output lines typically look like:
-       <Star> <Name> <Country> <OS> ...
-    Example line: 
-       * de-dus-wg-001.mullvad.ts.net Germany linux ...
-    """
-    result = subprocess.run(['tailscale', 'exit-node', 'list'], capture_output=True, text=True)
-    if result.returncode != 0:
-        raise Exception("Failed to list Tailscale exit nodes")
-
-    exit_nodes = {}
-    for line in result.stdout.splitlines():
-        parts = line.split()
-        # Basic sanity check for lines that actually contain node info
-        if len(parts) > 3:
-            # parts[0] might be "*" if it's the current node
-            # parts[1] is typically the FQDN (like "de-dus-wg-001.mullvad.ts.net")
-            # parts[2] is the Country
-            node_name = parts[1].strip()
-            node_country = parts[2].strip()
-            exit_nodes[node_name] = node_country
-
-    return exit_nodes
-
-def write_log(
-    old_node=None, new_node=None,
-    old_ip=None, new_ip=None,
-    old_country=None, new_country=None
-):
-    """
-    Appends a line to the log file reflecting a connection change.
-    Example:
-        2025.01.17 01:11:33 UTC Β· disconnected from de-dus-wg-001.mullvad.ts.net (Germany)
-         Β· connected to at-vie-wg-001.mullvad.ts.net (Austria)
-         Β· changed IP from 65.21.99.202 to 185.213.155.74
-    If no old_node is specified, it indicates a fresh start (no disconnection).
-    If no new_node is specified, it indicates a stop (only disconnection).
-    """
-
-    utc_time = datetime.datetime.utcnow().strftime('%Y.%m.%d %H:%M:%S UTC')
-    log_parts = [utc_time]
-
-    # If old_node was present, mention disconnect
-    if old_node and old_country:
-        log_parts.append(f"disconnected from {old_node} ({old_country})")
-
-    # If new_node is present, mention connect
-    if new_node and new_country:
-        log_parts.append(f"connected to {new_node} ({new_country})")
-
-    # If IPs changed
-    if old_ip and new_ip and old_ip != new_ip:
-        log_parts.append(f"changed IP from {old_ip} to {new_ip}")
-
-    line = " Β· ".join(log_parts)
-
-    # Append to file
-    with open(LOG_FILE, 'a') as f:
-        f.write(line + "\n")
-
-def get_connection_history():
-    """
-    Returns an in-memory list of parsed log lines. 
-    Each item looks like:
-        {
-            'timestamp': datetime_object,
-            'disconnected_node': '...',
-            'disconnected_country': '...',
-            'connected_node': '...',
-            'connected_country': '...',
-            'old_ip': '...',
-            'new_ip': '...'
-        }
-    """
-    entries = []
-    if not os.path.isfile(LOG_FILE):
-        return entries
-
-    with open(LOG_FILE, 'r') as f:
-        lines = f.readlines()
-
-    for line in lines:
-        # Example line:
-        # 2025.01.17 01:11:33 UTC Β· disconnected from de-dus-wg-001.mullvad.ts.net (Germany) Β· connected to ...
-        # We'll parse step by step, mindful that each line can have different combos.
-        parts = line.strip().split(" Β· ")
-        if not parts:
-            continue
-
-        # parts[0] => '2025.01.17 01:11:33 UTC'
-        timestamp_str = parts[0]
-        connected_node = None
-        connected_country = None
-        disconnected_node = None
-        disconnected_country = None
-        old_ip = None
-        new_ip = None
-
-        # We parse the timestamp. We have '%Y.%m.%d %H:%M:%S UTC'
-        try:
-            dt = datetime.datetime.strptime(timestamp_str, '%Y.%m.%d %H:%M:%S UTC')
-        except ValueError:
-            continue  # If it doesn't parse, skip.
-
-        for p in parts[1:]:
-            p = p.strip()
-            if p.startswith("disconnected from"):
-                # e.g. "disconnected from de-dus-wg-001.mullvad.ts.net (Germany)"
-                # We can split on "("
-                disc_info = p.replace("disconnected from ", "")
-                if "(" in disc_info and disc_info.endswith(")"):
-                    node = disc_info.split(" (")[0]
-                    country = disc_info.split(" (")[1].replace(")", "")
-                    disconnected_node = node
-                    disconnected_country = country
-            elif p.startswith("connected to"):
-                # e.g. "connected to at-vie-wg-001.mullvad.ts.net (Austria)"
-                conn_info = p.replace("connected to ", "")
-                if "(" in conn_info and conn_info.endswith(")"):
-                    node = conn_info.split(" (")[0]
-                    country = conn_info.split(" (")[1].replace(")", "")
-                    connected_node = node
-                    connected_country = country
-            elif p.startswith("changed IP from"):
-                # e.g. "changed IP from 65.21.99.202 to 185.213.155.74"
-                # We'll split on spaces
-                # changed IP from 65.21.99.202 to 185.213.155.74
-                # index:     0     1  2        3           4
-                ip_parts = p.split()
-                if len(ip_parts) >= 5:
-                    old_ip = ip_parts[3]
-                    new_ip = ip_parts[5]
-
-        entries.append({
-            'timestamp': dt,
-            'disconnected_node': disconnected_node,
-            'disconnected_country': disconnected_country,
-            'connected_node': connected_node,
-            'connected_country': connected_country,
-            'old_ip': old_ip,
-            'new_ip': new_ip
-        })
-
-    return entries
-
-def get_last_connection_entry():
-    """
-    Parse the log and return the last entry that actually
-    has a 'connected_node', which indicates a stable connection.
-    """
-    history = get_connection_history()
-    # Go in reverse chronological order
-    for entry in reversed(history):
-        if entry['connected_node']:
-            return entry
-    return None
-
-def set_exit_node(exit_node):
-    """
-    Generic helper to set Tailscale exit node to 'exit_node'.
-    Returns (old_ip, new_ip, old_node, new_node, old_country, new_country)
-    """
-    # Get old info for logging
-    old_info = get_mullvad_info()
-    old_ip = old_info.get('ip')
-    old_country = old_info.get('country')
-    old_node = get_current_exit_node()  # might be None
-
-    cmd = ['tailscale', 'set', f'--exit-node={exit_node}'] + TAILSCALE_ARGS
-    subprocess.run(cmd, check=True)
-
-    # Verify the new node
-    new_info = get_mullvad_info()
-    new_ip = new_info.get('ip')
-    new_country = new_info.get('country')
-    new_node = exit_node
-
-    return old_ip, new_ip, old_node, new_node, old_country, new_country
-
-def unset_exit_node():
-    """
-    Unset Tailscale exit node.
-    """
-    # For logging, we still want old IP + new IP. The 'new' IP after unsetting might revert to local.
-    old_info = get_mullvad_info()
-    old_ip = old_info.get('ip')
-    old_country = old_info.get('country')
-    old_node = get_current_exit_node()
-
-    cmd = ['tailscale', 'set', '--exit-node='] + TAILSCALE_ARGS
-    subprocess.run(cmd, check=True)
-
-    # Now see if the IP changed
-    new_info = get_mullvad_info()
-    new_ip = new_info.get('ip')
-    new_country = new_info.get('country')
-    new_node = None
-
-    write_log(old_node, new_node, old_ip, new_ip, old_country, new_country)
-    print("Exit node unset successfully!")
-
-def start_exit_node():
-    """
-    Start the exit node if none is currently set.
-    Otherwise, report what is already set.
-    """
-    current_exit_node = get_current_exit_node()
-    if current_exit_node:
-        print(f"Already connected to exit node: {current_exit_node}")
-    else:
-        # Use the default "tailscale exit-node suggest" approach
-        result = subprocess.run(['tailscale', 'exit-node', 'suggest'], capture_output=True, text=True)
-        if result.returncode != 0:
-            raise Exception("Failed to run 'tailscale exit-node suggest'")
-
-        suggested = ''
-        for line in result.stdout.splitlines():
-            if 'Suggested exit node' in line:
-                suggested = line.split(': ')[1].strip()
-                break
-
-        if not suggested:
-            raise Exception("No suggested exit node found.")
-
-        (old_ip, new_ip,
-         old_node, new_node,
-         old_country, new_country) = set_exit_node(suggested)
-
-        # Log it
-        write_log(old_node, new_node, old_ip, new_ip, old_country, new_country)
-        print(f"Exit node set successfully to {new_node}")
-
-def set_random_privacy_friendly_exit_node():
-    """
-    Pick a random node from PRIVACY_FRIENDLY_COUNTRIES and set it.
-    """
-    # Filter exit nodes by known privacy-friendly countries
-    nodes = list_exit_nodes()
-    # nodes is dict {node_name: country}
-    pf_nodes = [n for n, c in nodes.items() if c in PRIVACY_FRIENDLY_COUNTRIES]
-
-    if not pf_nodes:
-        raise Exception("No privacy-friendly exit nodes available")
-
-    exit_node = random.choice(pf_nodes)
-    (old_ip, new_ip,
-     old_node, new_node,
-     old_country, new_country) = set_exit_node(exit_node)
-
-    # Log
-    write_log(old_node, new_node, old_ip, new_ip, old_country, new_country)
-    print(f"Selected random privacy-friendly exit node: {exit_node}")
-    print("Exit node set successfully!")
-
-def set_random_exit_node_in_country(country_input):
-    """
-    Pick a random node in the given (case-insensitive) country_input.
-    Then set the exit node to that node.
-    """
-    country_input_normalized = country_input.strip().lower()
-
-    all_nodes = list_exit_nodes()
-    # Filter nodes in the user-requested country
-    country_nodes = [
-        node_name for node_name, node_country in all_nodes.items()
-        if node_country.lower() == country_input_normalized
-    ]
-
-    if not country_nodes:
-        raise Exception(f"No exit nodes found in {country_input}.")
-
-    exit_node = random.choice(country_nodes)
-
-    (old_ip, new_ip,
-     old_node, new_node,
-     old_country, new_country) = set_exit_node(exit_node)
-
-    # Log
-    write_log(old_node, new_node, old_ip, new_ip, old_country, new_country)
-    print(f"Selected random exit node in {country_input.title()}: {exit_node}")
-    print("Exit node set successfully!")
-
-def get_status():
-    """
-    Print current connection status:
-    - Whether connected or not
-    - Current exit node and IP
-    - Country of that exit node
-    - How long it has been connected to that exit node (based on the last log entry)
-    """
-    current_node = get_current_exit_node()
-    if not current_node:
-        print("No exit node is currently set.")
-        return
-
-    # Current IP & country
-    info = get_mullvad_info()
-    current_ip = info.get('ip')
-    current_country = info.get('country')
-
-    # Find the last time we connected to this node in the log
-    history = get_connection_history()
-    # We look from the end backwards for an entry that connected to the current_node
-    connected_since = None
-    for entry in reversed(history):
-        if entry['connected_node'] == current_node:
-            connected_since = entry['timestamp']
-            break
-
-    # We'll compute a "connected for X minutes/hours/days" style message
-    if connected_since:
-        now_utc = datetime.datetime.utcnow()
-        delta = now_utc - connected_since
-        # For user-friendliness, just show something like 1h 12m, or 2d 3h
-        # We'll do a simple approach:
-        total_seconds = int(delta.total_seconds())
-        days = total_seconds // 86400
-        hours = (total_seconds % 86400) // 3600
-        minutes = (total_seconds % 3600) // 60
-
-        duration_parts = []
-        if days > 0:
-            duration_parts.append(f"{days}d")
-        if hours > 0:
-            duration_parts.append(f"{hours}h")
-        if minutes > 0:
-            duration_parts.append(f"{minutes}m")
-        if not duration_parts:
-            duration_parts.append("0m")  # means less than 1 minute
-
-        duration_str = " ".join(duration_parts)
-        print(f"Currently connected to: {current_node} ({current_country})")
-        print(f"IP: {current_ip}")
-        print(f"Connected for: {duration_str}")
-    else:
-        # If we never found it in the log, it's presumably a brand new connection
-        print(f"Currently connected to: {current_node} ({current_country})")
-        print(f"IP: {current_ip}")
-        print("Connected for: <unknown>, no log entry found.")
-
-if __name__ == "__main__":
-    parser = argparse.ArgumentParser(description='Manage VPN exit nodes.')
-    parser.add_argument(
-        'action',
-        choices=['start', 'stop', 'new', 'shh', 'to', 'status'],
-        help='Action to perform: start, stop, new, shh, to <country>, or status'
-    )
-    parser.add_argument(
-        'country',
-        nargs='?',
-        default=None,
-        help='Country name (used only with "to" mode).'
-    )
-
-    args = parser.parse_args()
-
-    if args.action == 'start':
-        start_exit_node()
-    elif args.action == 'stop':
-        unset_exit_node()
-    elif args.action == 'new':
-        # This calls set_exit_node() using the Tailscale "suggest" approach
-        # from the original script
-        result = subprocess.run(['tailscale', 'exit-node', 'suggest'], capture_output=True, text=True)
-        if result.returncode != 0:
-            raise Exception("Failed to run 'tailscale exit-node suggest'")
-
-        exit_node = ''
-        for line in result.stdout.splitlines():
-            if 'Suggested exit node' in line:
-                exit_node = line.split(': ')[1].strip()
-                break
-
-        if not exit_node:
-            raise Exception("No suggested exit node found.")
-
-        (old_ip, new_ip,
-         old_node, new_node,
-         old_country, new_country) = set_exit_node(exit_node)
-        write_log(old_node, new_node, old_ip, new_ip, old_country, new_country)
-        print(f"Exit node set to suggested node: {new_node}")
-
-    elif args.action == 'shh':
-        # Random privacy-friendly
-        set_random_privacy_friendly_exit_node()
-
-    elif args.action == 'to':
-        # "vpn to sweden" => pick a random node in Sweden
-        if not args.country:
-            raise Exception("You must specify a country. e.g. vpn to sweden")
-        set_random_exit_node_in_country(args.country)
-
-    elif args.action == 'status':
-        get_status()
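-
-# Example invocations (assuming this script is on PATH as 'vpn'):
-#   vpn start       # connect to Tailscale's suggested exit node
-#   vpn new         # hop to a freshly suggested exit node
-#   vpn shh         # random node in a privacy-friendly country
-#   vpn to sweden   # random node in a named country
-#   vpn status      # current node, IP, and connection duration
-#   vpn stop        # clear the exit node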
diff --git a/z b/z
deleted file mode 100755
index 62c3a7c..0000000
--- a/z
+++ /dev/null
@@ -1,5 +0,0 @@
-#!/bin/zsh
-source ~/.zshenv
-source ~/.zprofile
-source ~/.zshrc
-