
ChatGPT GPT-4o

I have found ChatGPT very useful for learning to code and for coding assistance, and it seems to work quite a bit better in practice with GPT-4o.

I asked ChatGPT to create a swarm simulation in Python, like the old ‘swarm’ screensaver, then added a ‘hawk’ that catches the birds. When they are all caught, it restarts.

When it was done, I asked how to make it work in a browser. ChatGPT told me to use p5.js and rewrote the code for me. It took maybe 10 minutes to create the code… then far longer to get it sized so it didn’t ruin the formatting within WordPress.

Note that it should work okay on a newer cellphone, but it will likely hide the sides and the count of birds left unless you run it in landscape mode. It is really meant for a larger screen.

Swarm Simulation with Hawk

It is amusing to watch the speed increase as the birds are consumed when running the Python version, presumably because the flocking update compares every bird against every other bird, so each frame gets cheaper as the flock shrinks.

Here is the Python code for anyone interested. Press ‘H’ to toggle the hawk and ‘C’ to switch the catch behavior between ‘relocate’ and ‘consume’.

import pygame
import random
import numpy as np

# Constants
WIDTH, HEIGHT = 1200, 600  # Wider window
BIRD_SIZE = 3
HAWK_SIZE = 7
NUM_BIRDS = 150  # Increase number of birds initially
MAX_SPEED = 5
HISTORY_LENGTH = 20  # Length of the trail history

# More appealing colors - we will use a predefined palette
BIRD_COLORS = [
    (255, 109, 194),  # Pink
    (109, 218, 255),  # Light Blue
    (109, 255, 132)   # Light Green
]

# Different behaviors for different colors
BEHAVIORS = {
    (255, 109, 194): {'cohesion': 0.005, 'separation': 0.1, 'alignment': 0.05},
    (109, 218, 255): {'cohesion': 0.01, 'separation': 0.15, 'alignment': 0.1},
    (109, 255, 132): {'cohesion': 0.002, 'separation': 0.05, 'alignment': 0.03},
}
HAWK_SPEED = 7  # Faster hawk

# Initialize Pygame
pygame.init()
screen = pygame.display.set_mode((WIDTH, HEIGHT), pygame.RESIZABLE)  # Resizable screen
pygame.display.set_caption("Flock Simulation with Hawk")
font = pygame.font.SysFont(None, 36)  # Font for bird count

# Select the current behavior when the hawk catches a bird
CATCH_BEHAVIOR = 'relocate'  # Initial behavior

# Bird Class
class Bird:
    def __init__(self, x, y, color):
        self.position = np.array([x, y], dtype=float)
        self.velocity = np.random.rand(2) * 2 - 1  # Random direction
        self.velocity /= np.linalg.norm(self.velocity)
        self.velocity *= MAX_SPEED / 2  # Adjusting initial speed
        self.color = color
        self.behavior = BEHAVIORS[color]
        self.history = []  # To store position history for the trail

    def update(self, birds, hawk):
        self.flock(birds, hawk)
        self.position += self.velocity
        # Keep a copy of the previous position for boundary check
        prev_position = self.position.copy()
        # Screen looping
        self.position[0] %= WIDTH
        self.position[1] %= HEIGHT
        # Add current position to history and maintain length
        if len(self.history) >= HISTORY_LENGTH:
            self.history.pop(0)

        # Break the trail if the bird wraps around the screen
        if np.linalg.norm(self.position - prev_position) > MAX_SPEED:
            self.history.clear()

        self.history.append(self.position.copy())

    def flock(self, birds, hawk):
        sep = np.zeros(2)
        align = np.zeros(2)
        coh = np.zeros(2)
        count = 0

        for bird in birds:
            if bird != self:
                diff = bird.position - self.position
                distance = np.linalg.norm(diff)
                if distance < 50:
                    coh += bird.position
                    align += bird.velocity
                    count += 1
                    if distance < 20:
                        sep -= diff / distance

        if count > 0:
            coh = (coh / count - self.position) * self.behavior['cohesion']
            align = (align / count - self.velocity) * self.behavior['alignment']

        if hawk:
            diff = hawk.position - self.position
            distance = np.linalg.norm(diff)
            if distance < 10:  # Catching distance
                if CATCH_BEHAVIOR == 'relocate':
                    self.position = np.random.rand(2) * np.array([WIDTH, HEIGHT], dtype=float)
                    self.history.clear()  # Clear the history when relocating
                elif CATCH_BEHAVIOR == 'consume':
                    birds.remove(self)
                    return
            elif distance < 100:
                sep -= diff / distance * self.behavior['separation'] * 10

        self.velocity += coh + align + sep
        speed = np.linalg.norm(self.velocity)
        if speed > MAX_SPEED:
            self.velocity = self.velocity / speed * MAX_SPEED

# Hawk Class
class Hawk:
    def __init__(self):
        self.position = np.random.rand(2) * np.array([WIDTH, HEIGHT], dtype=float)
        self.velocity = np.random.rand(2) * 2 - 1
        self.velocity /= np.linalg.norm(self.velocity)
        self.velocity *= HAWK_SPEED

    def update(self, birds):
        if birds:
            closest_bird = min(birds, key=lambda bird: np.linalg.norm(bird.position - self.position))
            direction = closest_bird.position - self.position
            self.velocity = direction / np.linalg.norm(direction) * HAWK_SPEED
        
        self.position += self.velocity
        self.position[0] %= WIDTH
        self.position[1] %= HEIGHT

def draw_trail(screen, bird):
    for i in range(1, len(bird.history)):
        start_pos = bird.history[i - 1]
        end_pos = bird.history[i]
        alpha = int((i / HISTORY_LENGTH) * 255)
        color = (bird.color[0] * alpha // 255, bird.color[1] * alpha // 255, bird.color[2] * alpha // 255)
        pygame.draw.line(screen, color, start_pos, end_pos, BIRD_SIZE)

def main():
    global WIDTH, HEIGHT, screen, CATCH_BEHAVIOR  # Declare global variables here
    birds = [Bird(random.uniform(0, WIDTH), random.uniform(0, HEIGHT), random.choice(BIRD_COLORS)) for _ in range(NUM_BIRDS)]
    hawk = None
    running = True
    clock = pygame.time.Clock()

    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.KEYDOWN and event.key == pygame.K_h:
                if hawk is None:
                    hawk = Hawk()
                else:
                    hawk = None
            elif event.type == pygame.KEYDOWN and event.key == pygame.K_c:
                CATCH_BEHAVIOR = 'consume' if CATCH_BEHAVIOR == 'relocate' else 'relocate'
            elif event.type == pygame.VIDEORESIZE:
                WIDTH, HEIGHT = event.size
                screen = pygame.display.set_mode((WIDTH, HEIGHT), pygame.RESIZABLE)  # Update the screen with new size

        screen.fill((0, 0, 0))

        if hawk:
            hawk.update(birds)
            pygame.draw.circle(screen, (255, 0, 0), hawk.position.astype(int), HAWK_SIZE)

        for bird in birds[:]:  # Iterate over a copy so 'consume' can remove birds from the list safely
            bird.update(birds, hawk)
            draw_trail(screen, bird)  # Draw the trail
            pygame.draw.circle(screen, bird.color, bird.position.astype(int), BIRD_SIZE)

        # Display bird count
        bird_count_text = font.render(f'Birds: {len(birds)}', True, (255, 255, 255))
        screen.blit(bird_count_text, (10, 10))

        pygame.display.flip()
        clock.tick(60)

    pygame.quit()

if __name__ == "__main__":
    main()
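
To try it locally, the only dependencies are the two packages the script imports. Assuming the code is saved as swarm.py (the filename is my choice, not something the script requires):

pip install pygame numpy
python swarm.py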

Adding EXIF data to an image with Windows command line

There was a post in a Facebook group ( https://www.facebook.com/groups/acdsee ) asking how to add EXIF data to an image. The query was how to do this within ACDSee; that may be possible, but I did not find a way to do it in a quick search.

It seemed easy enough to do via the command line. I started a ChatGPT session to work on the problem; from prior tasks it seemed like ImageMagick might be useful.

Installing ImageMagick ( https://imagemagick.org/script/download.php#windows ) and running magick identify on an image was helpful in confirming what metadata was available:

magick identify -verbose IMGRP_2024_03_27_9999_20_to_share_BW_blur_better.jpg
Image:
  Filename: IMGRP_2024_03_27_9999_20_to_share_BW_blur_better.jpg
  Permissions: rw-rw-rw-
  Format: JPEG (Joint Photographic Experts Group JFIF format)
  Mime type: image/jpeg
  Class: DirectClass
  Geometry: 4733x3787+0+0
(...snipping for brevity...)
  Properties:
    aux:Lens: RF100mm F2.8 L MACRO IS USM
    aux:SerialNumber: 432029000416
    date:create: 2024-04-26T01:23:20+00:00
    date:modify: 2024-04-26T01:23:20+00:00
    date:timestamp: 2024-04-26T01:42:15+00:00
    exif:ApertureValue: 327680/65536
    exif:Artist: Adrian Hensler
    exif:BodySerialNumber: 123456789
    exif:CameraOwnerName:
    exif:ColorSpace: 1
    exif:ComponentsConfiguration: .
    exif:Copyright:
    exif:CustomRendered: 0
    exif:DateTime: 2024:04:25 22:23:20
    exif:DateTimeDigitized: 2024:03:27 12:43:43
    exif:DateTimeOriginal: 2024:03:27 12:43:43
    exif:ExifOffset: 261
    exif:ExifVersion: 0231
    exif:ExposureBiasValue: 0/1
    exif:ExposureMode: 0
    exif:ExposureProgram: 3
    exif:ExposureTime: 1/200
    exif:Flash: 0
    exif:FlashPixVersion: 0100
    exif:FNumber: 56/10
    exif:FocalLength: 100/1
    exif:FocalPlaneResolutionUnit: 2
    exif:FocalPlaneXResolution: 6240000/1415
    exif:FocalPlaneYResolution: 4160000/943
    exif:GPSInfo: 6867
    exif:GPSVersionID: ....
    exif:LensModel: RF100mm F2.8 L MACRO IS USM
    exif:LensSerialNumber: 0710002371
    exif:LensSpecification: 100/1, 100/1, 100/1, 100/1
    exif:Make: Canon
    exif:MakerNote: 1
    exif:MeteringMode: 3
    exif:Model: Canon EOS RP
    exif:PhotographicSensitivity: 100
    exif:PixelXDimension: 4733
    exif:PixelYDimension: 3787
    exif:RecommendedExposureIndex: 100
    exif:SceneCaptureType: 0
    exif:SensitivityType: 2
    exif:ShutterSpeedValue: 499712/65536
    exif:Software: ACDSee Ultimate 2024
    exif:SubSecTime: 593
    exif:SubSecTimeDigitized: 80
    exif:SubSecTimeOriginal: 80
    exif:UserComment:
    exif:WhiteBalance: 0
    exif:YCbCrPositioning: 1

One issue I did not expect was finding that EXIF tag names do not seem to be standardized between cameras; for my camera the ISO value is named “exif:PhotographicSensitivity”, but for other cameras it may be “exif:ISOSpeed” or “exif:ISOSpeedRatings”. This seems insane to me; I have not reviewed it fully.
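
A quick way to check which ISO tag a given file actually carries is to filter the verbose output (here photo.jpg is a placeholder for your own image); findstr treats the space-separated strings as alternatives, so any matching property line is printed:

magick identify -verbose photo.jpg | findstr /i "PhotographicSensitivity ISOSpeed"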

In any case, the correct tag name can be determined by reviewing the output for your own camera, so I continued.

Rather than sharing the full ChatGPT session, which is a bit jumbled, here is a summary of it as compiled by ChatGPT. I’ve edited it for clarity (and to remove excess uses of the word ‘delve’).

Automating EXIF Data Annotation on Images with a Batch Script

Introduction

Photographers and digital artists often need to handle various metadata associated with their images, such as EXIF data, which contains valuable information about the shot, including camera settings. This blog post explores a practical scenario where we automate the annotation of such data directly onto images using a batch script. We’ll dig into the problem, describe the iterative approach taken during a ChatGPT session to refine a solution, and suggest future enhancements.

The Challenge

The task was to automate the embedding of EXIF data (such as camera model, exposure settings, ISO, etc.) onto images, positioning this data visually on the image itself.

Approach and Iteration with ChatGPT

The solution involved using command-line tools to automate the task, for efficiency and scalability. The discussion began with an exploration of ImageMagick, a powerful image manipulation tool. We iteratively refined a Windows batch script that not only fetches EXIF data from an image but also dynamically adjusts the text size based on the image dimensions and annotates the data directly onto the image.

During the session, there were a few hurdles, such as handling different EXIF tags that varied by camera manufacturer and ensuring the text was legible on various image sizes. Each problem was tackled one at a time, often testing and revising the script to accommodate new findings or to optimize previous parts of the script.
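
Conceptually, the script chains two ImageMagick operations. Here is a minimal sketch run directly at the command prompt (the filenames are placeholders and the annotation text is hard-coded; inside a batch file the % signs must be doubled, as in the full script below):

rem Read a few EXIF fields from the image
magick identify -format "Model: %[exif:Model]\nExposure: %[exif:ExposureTime]\nISO: %[exif:PhotographicSensitivity]" photo.jpg

rem Draw text onto a copy of the image, anchored to the bottom-right corner
magick photo.jpg -gravity southeast -pointsize 40 -fill white -annotate +10+10 "Canon EOS RP  1/200  ISO 100" photo_annotated.jpg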

Final Script Explanation

Just to be clear, this comes with no guarantee; run at your own risk. I tested it briefly on one image only.

The finalized batch script works as follows:

@echo off
setlocal enabledelayedexpansion

:: Check if a filename has been provided as an argument to the script
if "%~1"=="" (
    echo Usage: %0 filename.jpg  :: Display usage if no argument is provided
    exit /b  :: Exit the script if no filename is provided
)

:: Set variables for source and output filenames
set "source=%~1"  :: Source image path (input)
set "output=%~dpn1_EXIF_ADDED%~x1"  :: Output image path with '_EXIF_ADDED' appended to the filename

:: Extract EXIF data and calculate font size based on image width
for /f "tokens=*" %%i in ('magick identify -format "width=%%[fx:w]\nheight=%%[fx:h]\nmodel=%%[exif:Model]\ndatetime=%%[exif:DateTimeOriginal]\nexposure=%%[exif:ExposureTime]\niso=%%[exif:PhotographicSensitivity]\nlens=%%[exif:LensModel]\npointsize=%%[fx:w/30]" "%source%"') do (
    set "line=%%i"  :: Read output line by line into variable 'line'
    set "!line!"  :: Set environment variables dynamically based on the output of the 'magick identify' command
)

:: Set default values for any missing EXIF data to prevent errors
if not defined model set "model=Unknown"
if not defined datetime set "datetime=Unknown"
if not defined exposure set "exposure=Unknown"
if not defined iso set "iso=Unknown"
if not defined lens set "lens=Unknown"
if not defined pointsize set "pointsize=40"  :: Set a default point size if not calculated

:: Annotate the image with the extracted and calculated EXIF data
magick "%source%" -gravity southeast -pointsize !pointsize! -fill white -annotate +10+10 "Model: !model!\nDate: !datetime!\nExposure: !exposure!\nISO: !iso!\nLens: !lens!" "%output%"

:: Display the path of the processed file
echo Processed: "%output%"
endlocal  :: End local scope for variables
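
For example, saving the script as add_exif.bat (the name is arbitrary) and running it against the image from earlier:

add_exif.bat IMGRP_2024_03_27_9999_20_to_share_BW_blur_better.jpg

The annotated copy is written alongside the original as IMGRP_2024_03_27_9999_20_to_share_BW_blur_better_EXIF_ADDED.jpg; the source file itself is not modified.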

Here is a sample showing the source image on the left, the command run at the bottom, and the output on the right:

( https://adrianhensler.com/wp-content/uploads/2024/04/batch_file_example-1.png )

Future Enhancements

The session highlighted potential improvements and enhancements, such as:

  • Using ExifTool: For a more robust and uniform extraction of EXIF data, integrating ExifTool could handle metadata more effectively. My understanding (I have not yet tested it) is that ExifTool will better handle the tag-name differences; a sketch follows this list. I do want to dig more into EXIF and why it is not standardized (but it is not a priority right now).
  • User Control Flags: Adding flags to control aspects like text display, color, and whether to output text on a transparent background (for use as an overlay layer) could provide greater flexibility and customization.
  • Dynamic Text Resizing and Placement: Further refining the script to dynamically adjust text size and placement based on image resolution and content could enhance legibility and aesthetics.
  • Interactive Mode: Incorporating an interactive mode where users can select which EXIF data to display and preview the output before final processing might enhance usability.
  • Support for Multiple Images: Extending the script to handle multiple images in a batch process could save even more time, making it suitable for professional photographers or anyone dealing with large sets of images; see the one-line loop after this list.
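
The ExifTool version I have in mind would look something like the following (untested on my side, and it assumes exiftool.exe is on the PATH); my understanding is that ExifTool reports a single ISO tag regardless of how the camera names the underlying field:

exiftool -Model -DateTimeOriginal -ExposureTime -ISO -LensModel photo.jpg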
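
Batch processing could likewise start as a one-line loop at the command prompt, calling the script (again assuming it is saved as add_exif.bat) once per JPEG in the current folder; inside a batch file the % signs would be doubled:

for %f in (*.jpg) do call add_exif.bat "%f"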

Conclusion

The interactive and iterative session with ChatGPT helped to develop a practical solution that automates the embedding of EXIF data onto images—a task that can be tedious if done manually. The final batch script provides a foundational tool that can be built upon with additional features and refinements.

This process underscored the importance of adapting tools to fit specific needs and highlighted how automation can significantly streamline workflows in photography and digital image processing. Moreover, it demonstrated the potential for AI-assisted coding sessions to facilitate rapid development and problem-solving.

The potential for future enhancements also points to an ongoing journey of improvement and adaptation, making the tool ever more robust and user-friendly. This blog post not only shares a useful script but also reflects the collaborative problem-solving process that can be applied to similar challenges in other fields.

Sharing and Feedback

Feel free to adapt the script to your needs and share your modifications. I am interested in hearing how others might improve or alter it for different applications. Feedback and shared experiences can lead to even better solutions, benefiting a wider community of users.