The Great War (1914-1918) Forum


Socks, Sütterlin, & Other Musings


What's wrong with this picture?


knittinganddeath


If you were paying close attention to the Soldiers and Their Units subforum last week, you may have been present for the excitement of this thread. A spammer posted a picture of a character from a video game set during the Great War and asked for help identifying "his relative." The ruse was discovered soon enough and the thread locked. However, it got me thinking: could AI be used to create fake historical documents, specifically photographs? I spent the last three days figuring out the basics, and the short answer, as you may have guessed, is yes. If you want the long answer, keep reading.

While the generation of AI text and art may seem like magic, the latter in particular requires a surprising amount of skill. Nevertheless, even a beginner can produce something. Whether that something corresponds to the image that you had in your head is another question altogether. In any case, you begin by providing the AI engine with a written prompt that includes the characteristics of everything that you want the image to contain. (It doesn't always work; prompt-writing is an art unto itself.) The engine can then generate an image from these words alone; alternatively, an existing image can be used in tandem with the written prompt.
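
For readers who want to experiment themselves, here is a minimal sketch of the text-to-image workflow. I'm not documenting my exact setup; the sketch assumes the open-source Stable Diffusion model driven from Python via the Hugging Face diffusers library, and the checkpoint name and prompt are purely illustrative.

```python
# Minimal text-to-image sketch using a Stable Diffusion checkpoint via the
# diffusers library. The model ID and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # a commonly used public checkpoint
    torch_dtype=torch.float16,
).to("cuda")                            # assumes an NVIDIA GPU is available

prompt = "portrait of a german soldier, wwi, studio photograph, sepia tone"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("generated_portrait.png")
```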

For my first attempt, I chose a photograph of Manfred von Richthofen. My inexpertly written prompt ran as follows: “portrait of manfred von richthofen, german pilot, wwi, ww1, prussian junker, 30yo, tall coat collar, german military cap, iron cross, double-breasted coat.” Iteratively refining the prompt produced the following series of images.

German pilot Manfred von Richthofen, left, decorated with the Pour le Mérite and Iron Cross; on the right are six AI-generated portraits based on the original image.
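
For completeness, the image-plus-prompt variant I mentioned above looks roughly like this. Again, this is a sketch assuming Stable Diffusion via diffusers rather than a record of my exact settings, and the source file name is a placeholder.

```python
# Sketch of the image-plus-prompt ("img2img") variant: an existing photo
# steers the composition while the written prompt steers the content.
# Checkpoint, file name, and strength value are illustrative assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

source = Image.open("richthofen_original.jpg").convert("RGB").resize((512, 512))

prompt = ("portrait of manfred von richthofen, german pilot, wwi, ww1, "
          "prussian junker, 30yo, tall coat collar, german military cap, "
          "iron cross, double-breasted coat")

# strength controls how far the output may drift from the source photo:
# low values stay close to the original, higher values follow the prompt more.
result = pipe(prompt=prompt, image=source, strength=0.6,
              guidance_scale=7.5).images[0]
result.save("richthofen_ai.png")
```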

To the unpracticed eye, these photos may pass muster at first glance. While these men are clearly not Manfred von Richthofen, they might well be other World War I-era German pilots...or not.

Closer scrutiny reveals that these photos are fakes, and not very good ones at that. Most obviously, the AI struggled to recreate the jacket. It’s quite an accomplishment for our poor man to have done up the nonsensically-placed buttons. He also faced some challenges when it came to displaying his medals. Although he’s clearly decorated, the shapes and sizes of those medals have no basis in reality. The AI also insisted on creating a cap badge. Germans, however, did not use cap badges. In short, there is no way that these photos show a German pilot.

To say that I was disappointed with the AI’s vision of Manfred von Richthofen would be an understatement. Perhaps, I thought, a more generic subject would improve the results. My new goal was to generate a photograph of German soldiers in the trenches. You be the judge:


Don’t look too closely at the faces. Maybe this isn’t a trench, but the Uncanny Valley.

Again we see that the AI tends strongly towards images of British soldiers; the helmets—though they may not be perfect likenesses of British Brodie helmets from that time—are certainly not the Stahlhelm or Pickelhaube that I specified in my prompts.

Another iteration produced helmets that at least look Stahlhelm-ish. However, the matte, blank quality of the helmets lends a feeling of wrongness to the photo. Nor does the rest of the uniform stand up to closer scrutiny. Not to mention that the AI took the “trench” part of the prompt very literally and placed the men in a ditch—which seems too densely populated for the scene to be real.

[Image: a second AI-generated attempt at German soldiers in a trench, this time with Stahlhelm-like helmets]
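
Incidentally, this kind of trial-and-error is easy to automate. The sketch below, under the same Stable Diffusion/diffusers assumption as before, simply loops over a couple of prompt variants and a handful of random seeds and saves every result for side-by-side comparison; the prompts and seeds are illustrative.

```python
# Looping over prompt variants and random seeds to compare outputs side by
# side. Prompts, seeds, and checkpoint are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = [
    "german soldiers in a trench, wwi, 1917, stahlhelm, black and white photograph",
    "german infantry wearing stahlhelm in a muddy trench, wwi, grainy period photo",
]

for p_idx, prompt in enumerate(prompts):
    for seed in (1, 2, 3):
        # fixing the seed makes each run reproducible, so prompt variants
        # can be compared fairly against one another
        generator = torch.Generator(device="cuda").manual_seed(seed)
        image = pipe(prompt, generator=generator,
                     num_inference_steps=30).images[0]
        image.save(f"trench_prompt{p_idx}_seed{seed}.png")
```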

Other creators suffered from many of the same problems as I did. As you can see below, these soldiers have the same non-committal Stahlhelm design made of an imaginary matte material.


“A group of WWI soldiers in a trench” by crazylarry

Meanwhile, the men in this formal group photo look as if they were photographed separately and then photoshopped into the same frame. Their questionable hats and facial expressions notwithstanding, the utter lack of emotional connection between them suggests that the photo is, at the very least, a composite if not a fake.


“An old image of WWI soldiers” by apelah1881.

Given that I doubted anyone would be taken in by photos with such obvious shortcomings, I decided to train my own AI model to (hopefully) produce some less-obvious fakes. Fifteen half-length portrait photographs of German soldiers provided the training set.
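
I won't reproduce the whole fine-tuning recipe here (it varies by tool), but the unglamorous first step is the same everywhere: preparing the training photos. Here is a minimal sketch, assuming the common convention of square 512x512 crops; the folder names are placeholders, not my actual setup.

```python
# Preparing a small fine-tuning set: centre-crop each scanned portrait to a
# square and resize to 512x512, the resolution most Stable Diffusion
# fine-tuning recipes expect. Folder names are placeholders.
from pathlib import Path
from PIL import Image

SOURCE_DIR = Path("scans/german_soldiers")   # the fifteen original portraits
OUTPUT_DIR = Path("training_set/512px")
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

for path in sorted(SOURCE_DIR.glob("*.jpg")):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img = img.resize((512, 512), Image.LANCZOS)
    img.save(OUTPUT_DIR / f"{path.stem}_512.png")
```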

In the first iteration of my model, I made several mistakes. In retrospect, I should not have included photos of several soldiers who wore glasses with frameless lenses. The AI did not understand how to interpret this style of glasses, with the result that many pictures featured men with distorted or sunken eyes.


Something’s very wrong with this soldier…did the photographer just capture a particularly weird moment or is AI afoot?

The original training set contained photos of men who wore field caps and Pickelhaube. The AI then attempted to combine these styles of headgear. While the results were not as bad as one might imagine, they were not good, either. Therefore, for the second iteration of the model, I used only photos of soldiers with Pickelhaube, with better results.

Yet the AI still struggled to generate decent human faces. For the most part, I managed to fix the zombie eyes and crooked mouths with a post-processing technique called upscaling. (There are also specialised face-processing engines that I haven’t experimented with yet.) However, upscaling is not without its own issues. Although it gave faces to men who previously didn’t really have one, it also didn’t know what to make of the spikes on the Pickelhaube and so turned them into devices that vaguely resembled radio antennas.


Top row: original images. Bottom row: upscaled images. In the latter, the improvement in faces is clearly noticeable.
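
For anyone who wants to try the same post-processing, here is one way to do it. I'm assuming the publicly available stabilityai/stable-diffusion-x4-upscaler model via diffusers; it is not necessarily the exact upscaler I used, and the file names are placeholders.

```python
# Upscaling a small, blurry AI portrait with a diffusion-based 4x upscaler.
# Model choice and file names are illustrative assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("soldier_lowres.png").convert("RGB").resize((128, 128))

# A short prompt describing the subject helps the upscaler invent plausible
# detail -- which is exactly where new artefacts (antenna-like spikes) creep in.
prompt = "black and white portrait photograph of a wwi german soldier"
upscaled = pipe(prompt=prompt, image=low_res).images[0]  # 4x: 128 -> 512 px
upscaled.save("soldier_upscaled.png")
```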

A general problem with AI art engines is that they have been overtrained on female figures. As a result, their impulse is to transform men into women. Even though my model was fine-tuned with photos of male soldiers, it was still built on top of an original in which the training data overemphasised women. As such, the new model occasionally tried to turn young men into young women by softening their facial features and cinching their waists in an attempt to create hourglass figures.


Left: Viktoria Savs, a woman who served in the Austro-Hungarian army under the male pseudonym Viktor Savs. Right: an AI-generated “German soldier” who may or may not be a woman.

The uniforms were also a mess. I'm no uniforms expert, but even I could detect clear signs that something was amiss: just as when it produced the pictures of Manfred von Richthofen, the AI struggled with button placement and sometimes treated the Pickelhaube’s spike as unrelated to the rest of the helmet. While I know nothing about guns or bayonets, I’m pretty sure that the AI is allowing itself some artistic license there too.

Although the pictures that feature in this article may not be particularly impressive to an expert, I believe that the general public would accept the better ones at face value. At the time of this writing, a total beginner cannot produce consistently convincing results using AI. However, better prompts, post-processing, and judicious human use of other AI tools & techniques would allow the pictures to withstand a higher level of scrutiny. With time and effort, I don't doubt that it would be possible to create plausible scenes and soldiers from an AI version of the Great War that never happened.

Edited by knittinganddeath

17 Comments


Recommended Comments

tankengine888

Posted (edited)

This is quite interesting..

AI has taken a big leap over the years; you have websites like ChatGPT which can generate answers/text for you. Say you wanted a biography of Sir Douglas Haig's military endeavours: it would pull that up without difficulty... it can also do an essay for you.

I wonder at what point images will be faked to a sub-par standard, or even an above-average one. It won't take long, though [speaking as someone learning to script] it would be a pain to get it accurate. Nonetheless, it's still astonishing how they can (nearly) properly replicate an image to turn it into a fake.

Still, an interesting post, thank you!

Zidane.

Edited by tankengine888
knittinganddeath

Posted

@tankengine888 Thanks for your thoughts. I admit that your friend's post made me chuckle when the truth was revealed. I did wonder how long the thread could have gone on for if the photo had been less recognisable and accompanied by a better personal history of the supposed soldier -- possible names, birthplace, birth year, siblings' names, etc.

My husband--a tech blogger and computer scientist--thinks that ChatGPT's human language skills are more of a party trick but has used it to generate code, which can be quite good. We both find its propensity to lie and invent sources concerning, especially as it does so with such utter self-assurance. There was recently an article in the news here about university students putting in library requests for textbooks that ChatGPT had recommended, only for librarians to discover that these books and authors do not exist. (It also told me that my dad was a social media influencer with 1 million followers on Instagram, and that he was a woman. That gave us a good laugh, but I wouldn't want someone who's thinking about doing business with my dad's company to get the same answer.)


Knittinganddeath

After reading your post

I will be treating as suspicious any posts which you may make on the 1st April

 :rolleyes:

 

 

 

Ray


I'd be bl**dy well careful on all posts on April 1st!

 

Yes, if that thread had gone on, I would've been hysterical!

Indeed, chatGPT has its faults.. but advantages too 

Zidane.

charlie962

Posted (edited)

One or two recent replies* to other threads on the forum have had that hint of 'ChatGPT'erie. Am I paranoid? 

* No names, no packdrill.

Edited by charlie962

How long will it be before most of the replies on this forum are sourced via AI ? 

Even the thread WIT ? 

Gloom.

knittinganddeath

Posted

2 hours ago, charlie962 said:

How long will it be before most of the replies on this forum are sourced via AI ? 

Even the thread WIT ? 

Gloom.

Interesting question -- my knee-jerk reaction is to say that it won't be anytime soon because a lot of the information on this forum is very niche and also very often based on document interpretation (photos, census returns, MIC, etc), but I may go experiment with ChatGPT later today.

14 hours ago, RaySearching said:

I will be treating as suspicious any posts which you may make on the 1st April

 :rolleyes:

I don't blame you! Trust no one!

Matlock1418

Posted (edited)

A very interesting and scary thread. 

"Thanks" and "I wish you hadn't blogged/I wish I hadn't read it" in equal measure = My head in the sand perhaps. ??

For GWF posts and the info supplied within, I always appreciate the sourcing of entries presented as 'facts' so as to allow for cross-checking [always assuming the sources aren't made up and a further cause of members' and librarians' confusion and frustration!]

As for 1 April posts ... Forewarned is forearmed!!!

For education - Are AI essays going to be marked by AI?  How will humanity ever learn anything more? 

Got to admit I am very old-school and way behind the curve when it comes to IT and thus am rather disturbed by such AI and the potential for the re-writing of history. Even without AI there seems far too much of that going on at present with a seeming denial and/or rewriting by humans of so much of what has actually gone before [with so much past, present and probably future grief resulting].  Is history a thing of the past?

For science and medical matters etc. the jury is out for me.

Can Asimov's Three Laws of Robotics work?

Is humanity moving towards slavery to IT and AI?  Or is humanity even more quickly heading for the waste bin as just another transient and reworkable blip in history?

M

Edited by Matlock1418

 

On 17/03/2023 at 04:05, Matlock1418 said:

For education - Are AI essays going to be marked by AI?  How will humanity ever learn anything more? 

 

As a college instructor who has traditionally used various forms of writing assignments, I have found ChatGPT to be a real challenge in the past couple of months.

I have already had a number of assignments submitted that were AI generated.

To be honest I am not sure what I am going to do -- perhaps reintroduce oral exams as a key component of a grade -- which I am sure will go down well.

Chris


I tried the AI version on 'Bing', asking it to search for my particular WW1 interest. I entered 'Mars Offensive 28th March 1918', hoping it would find my post and perhaps some other useful information.

It did find my post and also an LLT entry. However, the first result was for a WW2 Mars attack by the Germans on the Eastern Front in 1942. A long list of further results were not relevant.

I think it is still at 'infant school'.

Bob

7 hours ago, 4thGordons said:

I have already had a number of assignments submitted that were AI generated.

My computer teacher taught us that ChatGPT, if worded rightly, could 'make a dangerous device'... But this also led to the folks in my year level [and indeed school!] using ChatGPT for writing their own essays on, say, Macbeth.

I mean, points for ingenuity and thinking outside the box, but they still need to learn something! As one of my other teachers said, 'if you copy a piece off ChatGPT and you re-write it in your own words, then that'll suit me'

knittinganddeath

Posted

On 25/03/2023 at 18:36, 4thGordons said:

To be honest I am not sure what I am going to do -- perhaps reintroduce oral exams as a key component of a grade -- which I am sure will go down well.

One of my former classmates is a lecturer in French at a uni in the United States. Because of Google Translate, she had to rethink her grading system -- she was getting first-years turning in papers that looked like they were written by PhDs, but the same students were failing basic grammar and oral exams. Now she has all writing assignments done in class and she weights oral exams (and maybe quizzes?) much more heavily. 

The IB have talked about re-focusing their curriculum as well to emphasise critical thinking rather than essay-writing. I suspect the organisation may also choose to give oral presentations more weight for grading.

By the way, I'm very curious -- how did you figure out which assignments were written by ChatGPT?

 

21 hours ago, RobertBr said:

A long list of further results were not relevant.

I think it is still at 'infant school'.

I had the same experience as you. My husband said that if I set out to try to make it fail, then I was going to succeed, and that I should look at it as a glass half full rather than a glass half empty. Admittedly, that's not my strong suit.

52 minutes ago, knittinganddeath said:

By the way, I'm very curious -- how did you figure out which assignments were written by ChatGPT?

Well, in part there was just instinct - as you mentioned with your friend, the work was just not consistent with what I knew about the students and their other assignments. Then, pondering it, I realized there were NO errors at all (no small typos, no incorrect capitalization etc) -- also some used references with which I was familiar, but which I would be very surprised if a first-year undergrad was.... and then, when my antennae were quivering - I ran it through "ZeroGpt" detection software (which gave certainties of AI generation between 97 and 100%)

It is very disheartening to be honest because while some were on an exam - it was an open-book, use-whatever-you-like-just-make-sure-you-cite situation. The other was a minor assignment responding to a current article and, to be honest, it seems to me like it was MORE work to go the ChatGPT route than it would have been to just do it!

There are even more recent developments (currently not free) which will take the ChatGPT produced material and make it virtually detection proof (or such is the claim)

The other thing is if you get more sophisticated with your requests to AI then the responses get harder to detect (for example adding in more parameters including level of writing etc) - it's a bit like library searches: if you understand how the system works and how to combine search terms and wildcards you can get much better results than simplistic keyword searches.

As I finish up my 25th year of full time higher-ed teaching (plus 5 as a graduate student) I am wondering if I have the energy to deal with this or indeed if I should bother! Perhaps I should see how AI does at writing comments on essays :)

 

Chris


My daughter is living in a town called Badalona in Catalonia and misses our dog. My first experiment was asking Bing image AI to show a black cockapoo dog on Badalona Beach. The result blew my mind.

Bing came up with this:

[Bing's AI-generated image of a black cockapoo on Badalona Beach]

This is our actual dog

[photo of the actual dog]

This is the actual beach

[photo of the actual beach]


That is uncanny.

Most of my spam email is sent from alphanumeric addresses and touts US products (repetitively so) that have no relevance to me, but I did wonder when they started to include offers of tests and treatments for a medical condition that I'd just been diagnosed as having. But then at the same time the condition cropped up in a number of books I was reading and films that I watched. (My headmaster often assured us that there was no such thing as coincidence but never explained how else to describe it or account for it.)


"My headmaster often assured us that there was no such thing as coincidence but never explained how else to describe it or account for it)."

 

Could it be a mix of probability and RAS (reticular activating system) ?

