DALL-E 2 is a new AI-based toolkit that lets one create images which may not exist in the real dataset the model was trained on. It produces images from a text fragment or description. This means that given a statement, for example, “candles in air”, the AI system generates four or so images that best represent the text description.
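For readers who prefer to try this programmatically rather than through the web interface I used, the sketch below shows roughly how a prompt could be sent to OpenAI's image-generation endpoint via their Python SDK. This is only an illustrative assumption of how such a call might look (the prompt, count, and size are my own choices), not the exact setup used for the images in this article.

```python
# Minimal sketch: generate images from a text prompt with OpenAI's Python SDK.
# Assumes the SDK is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-2",          # the DALL-E 2 model discussed in this article
    prompt="candles in air",   # the same example prompt used above
    n=4,                       # ask for four candidate images
    size="1024x1024",
)

# Each item in response.data carries a URL to one generated image.
for image in response.data:
    print(image.url)
```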
Suggestion: The team could provide proof that the test images were not in the training database. If they did this for a random sample of several images, it would show that most of the output is not a copy.
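One simple way such a check could be approximated, purely as an illustration of the idea and not the team's actual method, is to compare a generated image against training images with a perceptual hash: near-identical images end up with very similar hashes. The function name, threshold, and use of the `imagehash` library below are my own assumptions.

```python
# Sketch: flag training images that are perceptually close to a generated image.
# Assumes Pillow and the imagehash package are installed.
from PIL import Image
import imagehash

def find_near_duplicates(generated_path, dataset_paths, max_distance=8):
    """Return (path, distance) pairs for dataset images whose perceptual
    hash is within max_distance bits of the generated image's hash."""
    gen_hash = imagehash.phash(Image.open(generated_path))
    matches = []
    for path in dataset_paths:
        distance = gen_hash - imagehash.phash(Image.open(path))  # Hamming distance
        if distance <= max_distance:
            matches.append((path, distance))
    return matches
```

An empty result over a random sample of generated images would be one piece of evidence that the outputs are not copies, though a thorough proof would need a search over the full training set.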
For example, for the text expression “candles in air”, I got the following three images. They show that each image was drawn by the system part by part and is not a cut-and-paste job; it does not copy from the original images. The following examples show how accurately the system tries to draw the prompt.
In the following image, the system actually draws “candles in sky” as metallic-looking candles with artificial flames.

Another generated image follows; this seems to be a completely system-generated image.

The next one is given below; this also seems to have been generated by the system from scratch.

Note that none of these seem to be copied from a source.
Another prompt I tested was “diyas in sky”; the following are the images the product generated. They look realistic, but on closer inspection one can see that they, too, are generated pixel by pixel.
I shall cover more about this toolkit in coming articles.



In conclusion, these images look so convincing that one might think they must exist in the original dataset on which the system was trained. The team working on this product could provide proof that these images were not in the database; doing so for a random sample of several images would demonstrate that most of the generated art is not copied from original images. If they have already provided such proofs, do share the link with me as well.