MediaPro refugee looking for a new home for my DAM

The XMP standard is pretty clear: embed metadata in JPG, TIF, PSD, PSB, DNG, ...; write a sidecar for RAW files. Your best future-proof approach is to follow this standard. Do not write embedded metadata for RAW files.
Writing embedded metadata doesn't require re-compressing the image data.
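As a rough illustration of that rule (a sketch only; the extension sets below are my own examples, not an exhaustive inventory from the standard):

```python
# Sketch of the embed-vs-sidecar convention described above.
# The extension sets are illustrative examples, not a complete list.
import os

EMBED_FORMATS = {".jpg", ".jpeg", ".tif", ".tiff", ".psd", ".psb", ".dng"}
RAW_FORMATS = {".nef", ".cr2", ".cr3", ".orf", ".arw", ".raf", ".rw2"}

def metadata_destination(path: str) -> str:
    """Return where XMP metadata should go for this file, per the convention."""
    ext = os.path.splitext(path)[1].lower()
    if ext in EMBED_FORMATS:
        return "embedded"   # write the XMP packet inside the file itself
    if ext in RAW_FORMATS:
        return "sidecar"    # write IMG_0001.xmp next to IMG_0001.nef
    return "sidecar"        # conservative default for anything unknown

print(metadata_destination("IMG_0001.NEF"))  # -> sidecar
print(metadata_destination("IMG_0001.jpg"))  # -> embedded
```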
 
Thumb display may not be the fastest in Photo Supreme, but here a screen is filled with thumbs in a fraction of a second.
You may want to report this. A fix for issues is typically released quickly.
 
Before I proceed, keep in mind that I'm on shaky ground with respect to my understanding of the ins and outs of metadata storage. So my understanding is, and feel free to (nicely, please) tell me if I'm wrong, that IPTC defines the field names of the metadata (and maybe describes the data that's expected, like length? IDK) so that metadata stays portable. XMP describes the format of any data describing a digital asset, whether it is standard metadata like IPTC or customized data like the "recipe" for recreating a non-destructive edit. Is that correct?
IPTC is old, and yeah, it's basically about standards for press photographers. See https://iptc.org/standards/photo-metadata/

And how different software implements it: https://iptc.org/standards/photo-metadata/software-support/

XMP is an Adobe standard that defines how the metadata is written for storage, basically. It's actually sort of fun to look at; see https://wwwimages2.adobe.com/conten... Release cc-2016-08/XMPSpecificationPart1.pdf. Or just open an XMP sidecar in a text editor. It actually makes sense just looking at it. And yeah, there are standards for what you include, like numerals for time and so on.
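For instance, here is a stripped-down, made-up sidecar parsed with nothing but the Python standard library; the namespace URIs are the real ones from the spec, the keywords are invented:

```python
# A minimal, hand-written XMP sidecar: it's just RDF/XML, and quite readable.
import xml.etree.ElementTree as ET

SIDECAR = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about=""
      xmlns:dc="http://purl.org/dc/elements/1.1/">
   <dc:subject>
    <rdf:Bag>
     <rdf:li>vacation</rdf:li>
     <rdf:li>beach</rdf:li>
    </rdf:Bag>
   </dc:subject>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>"""

ns = {"rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
      "dc": "http://purl.org/dc/elements/1.1/"}
root = ET.fromstring(SIDECAR)
keywords = [li.text for li in root.findall(".//dc:subject/rdf:Bag/rdf:li", ns)]
print(keywords)  # ['vacation', 'beach']
```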
I've been looking at a lot of different packages in the last few weeks so I apologize if I get some of these comments attached to the wrong package.

It seems like, again only for JPEGs, some software embeds the IPTC metadata in the media file, others write it to both the media file and an XMP sidecar, and others write it only to an XMP or their own proprietary sidecar (ACDSee, C1, etc.).

1) Is there any collective wisdom about which method (embedded vs sidecar vs combination) is better or is it a preference so to each his own?
There are pros and cons, but those are usually secondary to other choices, like whether you save your raw files as DNG. It's fairly uncommon to use sidecars for files where the metadata can be embedded in the file, like JPEG; usually that's done for internal use. Lots of applications would just ignore a JPEG sidecar. But Mylio, say, can use the sidecars to send info back and forth without sending the whole file. In short, sidecars are usually used for proprietary raw files like ORF, NEF, or ARW, but not with DNG or JPEG.
My gut reaction is to embed it so it doesn't get lost, but I can see the wisdom of putting it in a sidecar so the original file doesn't get potentially messed up. I'm concerned that over time the sidecars could get separated from the media file. I'm also concerned that if it is in a sidecar and you share the photo with someone who isn't a photography person, sending them the .xmp wouldn't do any good, since they aren't likely to have software that knows what to do with it. So it seems like at some point you will likely embed it in the JPEG anyway (or at least in the ones you share), right?
Generally one isn't sending raw files to people who don't know what they are. So, again, the convention is that usually the JPEG or HEIC is going to have the metadata in it. Another photographer seeing a NEF might say, "hey, where's the sidecar?"
2) If the companies write to an XMP sidecar (so literally a .xmp file), is that more portable than if they write to their own proprietary sidecar (like .on1, if I recall)? I realize that, given the extensibility of XML, they can write a lot of different types of custom data in there, but I'm specifically talking about the IPTC metadata. So I guess a shorter way to ask it is: do many of the photo applications we've been talking about know to look for .xmp files, but wouldn't necessarily look for a .on1 file (or whatever extension C1 puts on theirs), so that writing to .xmp by its nature makes the metadata more portable?
Look inside a few. I haven't looked at any sidecars made by On1. I did use some made by Mylio, and in general anything that was specific to Mylio only it could read; other applications like Lr would just ignore it. Sort of like Maker Notes in Exif: info only of use to the camera manufacturer and not used by anyone else. Ditto for, say, the adjustment info Lr would store; most other software wouldn't know what to do with an Adobe adjustment and ignores it. But it could find and use the IPTC description or instructions.
I'm just trying to get a handle, maybe a little late, on how I want my DAM to store the metadata, since most of them (well, a lot at least) have options.

Changing gears a bit more ... photo edits being stored in sidecars ...

As I think I understand it, to varying degrees of success, some edits that are created by package A can be stored in a sidecar that package B can make a reasonable attempt at recreating (YMMV).
I think that's mostly true only with Adobe since it's the big gun; darktable or digiKam IIRC can read some Adobe edits. Mylio too, although I'm not sure if that's still true. And PM can read some, like maybe just crop.
Since it would be a much bigger deal to me if I lost my metadata vs. losing my edits, is it a) possible, or b) advisable, to store the IPTC metadata embedded in the JPEGs, but the edits (and maybe the IPTC data also) and whatever else the developer wants in the sidecar? I realize it would vary by package, but would that be a worthy goal to shoot for as I'm evaluating things and figuring out how I want to set them up to work, if I have options?
I wouldn't have JPEGs with sidecars. If I did I'd do it like RAW Power, which I believe stores them completely separately. Which, if one thinks about it, isn't really much different than storing that same data in a database.
Lastly, and this may be a stupid question ... when metadata is embedded in the JPEG, is that a lossless process? I'd guess that info goes in the "header" of the file, so I wouldn't think it would have to manipulate the image data, but that's a total guess. This might be an "it depends on the developer" answer, IDK.
It shouldn't affect the image. But writing a file is writing a file, so there's that. Elsewhere in this thread there's a link to how that goes awry with some raw files, as Photo Mechanic users found. I've never had problems with JPEGs, but again, any time you write a file something can go wrong, right? ;-)
Thanks for reading another long question from me. I so appreciate hearing your opinions on these questions. Thank you for your time.
 
IPTC is an organization and is still very much alive with XMP. That link you shared shows how well software supports their IPTC (for XMP) standard. I noticed that Photo Supreme is among the best and is listed as the only one supporting IPTC regions; I'm not surprised. Looking at that software list, I am surprised how poorly PM6 supports IPTC, considering that press photographers are their market.

It's more stringent than that: by the standard, it is NOT allowed to write sidecars for JPG, and it is NOT allowed to write embedded metadata for RAW. Hence why Lr won't read XMP from a sidecar for a JPG. They follow the standard, whereas some other software takes the standard with a grain of salt. This is also why I recommend not writing embedded metadata to RAW files. Your future tool just might not read it, as the standard doesn't allow it.
 
Everyone else appears to have covered it, and I'd agree - always write metadata to JPEGs, because every app can read/write IPTC tags into Exif, but not everything can (or does) read XMP sidecars.

That said, it's not a perfect world - for example, Amazon Prime Photos and Google Photos do not read IPTC data, or index their keywords as part of their search (which is quite annoying).

Writing IPTC tags to a JPEG will be lossless; as you mention, it's the 'header', so it shouldn't require re-encoding the image. Be careful, though: I have come across some utilities which just rewrite the whole JPEG, including re-encoding, rather than updating the metadata. Always check a new tool by saving a large file with updated IPTC data several times, and seeing if the size decreases or artifacts appear.
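Something like this quick harness does the trick, assuming exiftool and Pillow are installed; the exiftool call here just stands in for whatever tool you're vetting:

```python
# Repeatedly write keywords to a copy of a large JPEG and watch the size.
# A tool that only updates the metadata block leaves the pixel data alone;
# a steadily shrinking file (or visible artifacts) means it re-encodes.
import os
import shutil
import subprocess
from PIL import Image, ImageChops

src, work = "big_test.jpg", "recode_check.jpg"   # placeholder file names
shutil.copyfile(src, work)

for i in range(5):
    # Replace this call with the tool you are actually testing.
    subprocess.run(["exiftool", "-overwrite_original",
                    f"-IPTC:Keywords+=pass{i}", work], check=True)
    print(f"pass {i}: {os.path.getsize(work)} bytes")

# Belt and braces: compare the decoded pixels before and after.
diff = ImageChops.difference(Image.open(src).convert("RGB"),
                             Image.open(work).convert("RGB"))
print("pixels identical:", diff.getbbox() is None)
```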
 
IPTC is an organization and is still very much alive with XMP. That link you shared shows how well software supports their IPTC (for XMP) standard. I noticed that Photo Supreme is among the best and is listed as the only one supporting IPTC regions; I'm not surprised. Looking at that software list, I am surprised how poorly PM6 supports IPTC, considering that press photographers are their market.
The list is pretty old for some software (like 2017 for Adobe), but more recent for others like PM and Photo Supreme. I would guess that since PM doesn't do facial recognition, that's why there are no regions. Lr does them, but again, they haven't updated their list there since 2017, I guess (oddly). Is there a detailed list for each software program? I only saw the summary one I linked above.

Worth noting, I suppose, is that PM does support hierarchical keywords, which don't have a standard at IPTC. IPTC has started one, but I think everyone ignores it, and the only programs that use hierarchies seem to be adopting Adobe's solution (the lr:hierarchicalSubject tag). PM tends to focus pretty tightly on existing users, and I doubt many are doing person IDs by position on the field, vs using, say, team rosters for captions, titles, and keywords.
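For reference, Adobe's encoding is simple: each lr:hierarchicalSubject entry is one pipe-delimited path. A toy illustration (the keyword paths are made up):

```python
# Each hierarchical keyword is stored as a "Parent|Child|Grandchild" path.
paths = ["People|Family|Kids", "Places|Europe|France|Paris", "Events|Birthday"]

def to_tree(paths):
    """Fold the pipe-delimited paths into a nested dict (a keyword tree)."""
    tree = {}
    for p in paths:
        node = tree
        for part in p.split("|"):
            node = node.setdefault(part, {})
    return tree

print(to_tree(paths))
# Flat keywords (what typically also lands in dc:subject) are just the parts:
print(sorted({part for p in paths for part in p.split("|")}))
```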
It's more stringent than that: by the standard, it is NOT allowed to write sidecars for JPG, and it is NOT allowed to write embedded metadata for RAW. Hence why Lr won't read XMP from a sidecar for a JPG. They follow the standard, whereas some other software takes the standard with a grain of salt. This is also why I recommend not writing embedded metadata to RAW files. Your future tool just might not read it, as the standard doesn't allow it.
Thanks; I didn't realize it was that official, as it were. But it certainly makes sense.
 
I agree, though, that in general some of the support from these individuals/small outfits can be excellent (it can also be awful). Thorsten Lemke, David Nanian, and Jerry Krinock (BookMacster) have all responded in exemplary fashion to any queries I've contacted them with.
I'm sorry that I neglected to mention Jerry Krinock. . . He is currently helping me with problems caused by recent changes to the Brave browser that have broken Bookmacster compatibility. Ironically, I discovered Bookmacster when URLM Pro was winding down due to its sole developer's health issues.

I really do understand the concern about small shops even though I can count the times I've been let-down by same on the fingers of one hand. . . With several fingers unused. ;-)
I just want to post an update regarding the BookMacster problem. . . Jerry just posted an update that allows me to save bookmarks again. REALLY fast turnaround once the issues were identified.

Brave is my default web browser but its development certainly has had its problems. A user forum system is not my favorite way of providing support. 1Password also relies on its official and Reddit user forums, with similar lopsided results as Brave. . .
 
Hi:

I am in exactly the same situation as Shutter-bu. And I am unhappy so far with my research results.

I tried NeoFinder, but I really struggle with the last century UI. Also, my demo version does not build thumbnails for some reason. I have not ruled it out at this stage, but I am not convinced either.

I invested some time testing Photo Supreme, including the Media Pro transition (which takes time for large catalogs, but otherwise works OK). The functionality of Photo Supreme is quite impressive. My biggest concern at this stage is performance. When I scroll through a large number of thumbnails, the software needs 1 or 2 seconds for each page until all thumbnails are there. I am not sure if this is an extreme use case, but I regularly look through a large catalog and scroll through hundreds of pictures. This is easily possible with Media Pro and with Apple's Photos app, but not with Photo Supreme. For me this is a no-go. Does anyone have similar experience of the product, or is it just me?
Oh that's not good to hear. I've just been playing with a small sample set so far so I haven't seen any performance issues. Hopefully yours is an isolated and fixable problem. I've got 300k+ images that I was planning on putting in one, or maybe two catalogs so if your performance is typical that's going to be a no-go for me as well.

Hopefully someone here has as large a collection and can comment.
Every time I work with Media Pro, I am sad, because it is such a great piece of software for my purposes and would only need a little refresher for geotagging and face recognition.
I feel exactly the same way. I could even deal with no geotagging and face recognition if it would just go to 64-bit. I'm not a programmer, but it doesn't seem like that would be a big deal to do. IDK, though.

It's not fancy but it does its job and it does it well. I'd call it a simple, yet elegant workhorse. There are of course small things here and there that don't work right or I wish would work differently, but all in all it's relatively perfect for my needs ... and it's over 15 years old!!! I'm so irritated with iView for selling out to Microsoft, Microsoft for not really doing anything with it, and Phase One for abandoning it. I am grateful to Phase One for at least keeping it going this long. Otherwise I would have had to abandon it a long time ago when it had the 2GB catalog size limit. That was starting to bite me before they fixed that part at least.

I've been spending a good chunk of time on it in the last few days working on picking pictures for an end of year video and it's just so easy to do. Photo Supreme seems to have been cut from the same cloth so that's why I've been holding out hope that it will fit my needs as well as Media Pro does.
My 2 cents.
Try reducing the image preview size. My 100,000+ file catalog, set with a 640 preview size, seems fine.

Another suggestion is to re-read the Quick Start manual about once a week when you are beginning your PSU learning curve. It is packed with an amazing amount of information in such a small package.

And finally, don't hesitate to ask Hert for help. He is very capable, responsive and helpful . . . a rare commodity these days!

Good luck,

Jack
 
IPTC is an organization and is still very much alive with XMP. That link you shared shows how well software supports their IPTC (for XMP) standard. I noticed that Photo Supreme is among the best and is listed as the only one supporting IPTC regions; I'm not surprised. Looking at that software list, I am surprised how poorly PM6 supports IPTC, considering that press photographers are their market.

It's more stringent than that: by the standard, it is NOT allowed to write sidecars for JPG, and it is NOT allowed to write embedded metadata for RAW. Hence why Lr won't read XMP from a sidecar for a JPG. They follow the standard, whereas some other software takes the standard with a grain of salt. This is also why I recommend not writing embedded metadata to RAW files. Your future tool just might not read it, as the standard doesn't allow it.
Thanks for the clarification. I wish it had been written that simply in the other places I've been looking.

I was confused by your comment about Lr at first, though, because all my JPEGs end up with .xmp sidecars after sending them through Lr. Upon further investigation, I apparently had ticked the box for "automatically write changes into XMP" somewhere along the way. Interestingly, when I uncheck it, it says: "Warning: Changes made in Lightroom will not automatically be visible in other applications unless written to XMP." Both of those statements are a little vague, though, because they could mean writing XMP embedded or they could mean an XMP sidecar. A post from Scott Kelby removed the vagueness but confused the situation for me:


In the post he says that setting refers to writing it to sidecars. However, why would it warn me that it wouldn't be visible to other applications unless it's in a sidecar ... unless it is only talking about raw files? If it's in a JPEG (or the other formats, including .psd) it should be written as embedded data, so it should be visible to other applications ... I would think.

I tested it with a file that had no additional IPTC metadata except the camera Exif (as verified with Graphic Converter and Phil Harvey's awesome ExifTool); if I add some keywords and export the file out of Lr, it does have all the IPTC & XMP data embedded in the file. So I can only assume that warning is talking only about RAW files. It should say that, though, IMHO.

I'm finding the documentation in most of the software I've looked at to be vague on this matter, since the situations are different for JPEG, PSD, etc. vs. RAW. Most of them don't clarify which they are talking about, or they specify in one section but then make vague comments in other parts. So it will sound like the software is going to do the wrong thing (like add a sidecar to a non-RAW file), when they are apparently assuming you are only dealing with RAW files. I find that an odd choice because a lot of the time, getting metadata out of the software means doing an export, and I would imagine people often export to JPEG. Maybe not, though.

So it sounds like for all of my non-RAW files, I can get rid of all of the .xmp files, since the only reason they were created was a mistaken preference setting.
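Before actually deleting anything, a cautious sketch like this (the folder and extension list are just my assumptions) could list which sidecars pair with a non-RAW file:

```python
# List (don't delete!) .xmp sidecars sitting next to files that could hold
# the metadata embedded instead. Folder and extensions are illustrative.
from pathlib import Path

EMBEDDABLE = {".jpg", ".jpeg", ".tif", ".tiff", ".psd", ".dng"}

for sidecar in Path("~/Pictures").expanduser().rglob("*.xmp"):
    # Lr names sidecars after the image: IMG_0001.jpg -> IMG_0001.xmp
    partners = [sidecar.with_suffix(ext) for ext in EMBEDDABLE
                if sidecar.with_suffix(ext).exists()]
    if partners:
        print(f"{sidecar} pairs with {partners[0]} (removal candidate)")
```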

Thanks to all of you for all the clarifications on how this works. I've known for a long time to always make sure annotations can be written out in IPTC-compatible format. I just didn't know the idiosyncrasies of embedded vs. not, and how XMP factored in.

Y'all have been an awesome source of information and advice for all of this. I can't thank you enough.
 
I invested some time testing Photo Supreme, including the Media Pro transition (which takes time for large catalogs, but otherwise works OK). The functionality of Photo Supreme is quite impressive. My biggest concern at this stage is performance. When I scroll through a large number of thumbnails, the software needs 1 or 2 seconds for each page until all thumbnails are there. I am not sure if this is an extreme use case, but I regularly look through a large catalog and scroll through hundreds of pictures. This is easily possible with Media Pro and with Apple's Photos app, but not with Photo Supreme. For me this is a no-go. Does anyone have similar experience of the product, or is it just me?
Oh that's not good to hear. I've just been playing with a small sample set so far so I haven't seen any performance issues. Hopefully yours is an isolated and fixable problem. I've got 300k+ images that I was planning on putting in one, or maybe two catalogs so if your performance is typical that's going to be a no-go for me as well.

Hopefully someone here has as large a collection and can comment.
Every time I work with Media Pro, I am sad, because it is such a great piece of software for my purposes and would only need a little refresher for geotagging and face recognition.
I feel exactly the same way. I could even deal with no geotagging and face recognition if it would just go to 64-bit. I'm not a programmer, but it doesn't seem like that would be a big deal to do. IDK, though.
I don't know of any DAM solution on Apple hardware where scrolling through large collections of thumbnails (in the tens of thousands) is as fast as Apple Photos or Media Pro. (As an aside, that's why I'm using Capture One in sessions mode and importing the developed JPEGs into Apple Photos, for getting an overview of my work and distributing some of them to my iDevices for showing to friends and the like. Media Pro doesn't work on my OS version anymore. I'm just a hobbyist.)

And as someone who has worked in the software industry for decades, I can tell you that the effort of migrating software from 32 to 64 bits can be anything between just recompiling (i.e., a couple of hours) and realizing that starting from scratch would be cheaper and faster.

I assume that in the case of Media Pro, it was the latter. Maybe it was as fast as it was because of not (!) using all the recommended APIs of macOS. And now they either have to use those APIs, or start writing their own from scratch in a 64-bit world...

If it were simple to port Media Pro, they could have sold it to someone else...

When I retire in a couple of years, I will write my own DAM, server based, multi user, fast as hell, with all fancy AI stuff, ... Just kidding

--peter
 
I don't know of any DAM solution on Apple hardware where scrolling through large collections of thumbnails (in the tens of thousands) is as fast as Apple Photos or Media Pro. (As an aside, that's why I'm using Capture One in sessions mode and importing the developed JPEGs into Apple Photos, for getting an overview of my work and distributing some of them to my iDevices for showing to friends and the like. Media Pro doesn't work on my OS version anymore. I'm just a hobbyist.)
That's interesting that you are using C1 sessions with a large number of photos. I've been going back and forth with C1 support asking lots of questions, and the support guy there said: "There's no issue with handling large numbers of images, the only thing to make it more stable and accessible is to use the referenced workflow for catalogs (note that sessions are not a good option for so many images)."

So did you choose sessions vs catalogs because it seemed faster or did it just fit your workflow better?
And as someone who has worked in the software industry for decades, I can tell you that the effort of migrating software from 32 to 64 bits can be anything between just recompiling (i.e., a couple of hours) and realizing that starting from scratch would be cheaper and faster.

I assume that in the case of Media Pro, it was the latter. Maybe it was as fast as it was because of not (!) using all the recommended APIs of macOS. And now they either have to use those APIs, or start writing their own from scratch in a 64-bit world...

If it were simple to port Media Pro, they could have sold it to someone else...
Thanks for the perspective. I wish they would have sold it to someone else. IMHO it's a good workhorse that just needs a little TLC ... but maybe, like you said, the port would be more trouble than it's worth.

OR ... maybe they just cannibalized it into their interface as much as they could/cared to and would rather not hand it to a competitor. Someone like Skylum, who is soooo in need of a DAM for Luminar and is trying to be a Lr competitor, could really use it. Although sometimes integrating is sooo much harder than starting from scratch. Oh well, if wishes were fishes ...
 
If it were simple to port Media Pro, they could have sold it to someone else...
Along similar lines to Picasa: if only Google had open-sourced it and given it to the community. Imagine how good it could be now.

I believe the reason they didn't was due to various non-open-source components used within the code. So perhaps the same issue applied to MediaPro.
When I retire in a couple of years, I will write my own DAM, server based, multi user, fast as hell, with all fancy AI stuff, ... Just kidding
Well, when I publish mine onto GitHub you'd be welcome to contribute. It's server-based, multi-user, fast as hell, and I'm considering putting face-recognition and other stuff like that into it eventually. :)

A few sneak preview screenshots... (note that I'm concentrating on the core functionality - so the UI needs a bit of polish...).

[three screenshots]
 
I don't know of any DAM solution on Apple hardware where scrolling through large collections of thumbnails (in the tens of thousands) is as fast as Apple Photos or Media Pro. (As an aside, that's why I'm using Capture One in sessions mode and importing the developed JPEGs into Apple Photos, for getting an overview of my work and distributing some of them to my iDevices for showing to friends and the like. Media Pro doesn't work on my OS version anymore. I'm just a hobbyist.)
That's interesting that you are using C1 sessions with a large number of photos. I've been going back and forth with C1 support asking lots of questions, and the support guy there said: "There's no issue with handling large numbers of images, the only thing to make it more stable and accessible is to use the referenced workflow for catalogs (note that sessions are not a good option for so many images)."

So did you choose sessions vs catalogs because it seemed faster or did it just fit your workflow better?
I've been using Capture One for about 5 years now. I started with version 7.

Capture One's catalogs are a newer development than their sessions.

Mostly I use catalogs. It used to be that a catalog over 100k images was a big deal, but not so much anymore. But big catalogs do not start up very fast; my 15K catalog takes 20s or so to start up.

For large catalogs, referenced images is definitely the way to go.

C1 has a very useful Filters tool, but it gets really bogged down for large catalogs. I put mine on a separate tool tab, so the application does not try to use it when I am in the All Images folder.

Capture One was purchased and has had new owners since June 2019. I am starting to see differences in how the company is managed. Some better, some worse. Prices are going up, features are getting better, but service is getting a little worse.

You can have Capture One store its metadata in an XMP sidecar file, but I recall there were some issues with other applications reading that content.

Capture One doesn't work very well if the catalog file (bundle) is on a network drive; it has a tendency to hang if the connection latency changes drastically.

A new Capture One is typically released every fourth quarter, with some dot releases in between. If you want a version of Capture One that runs on the latest operating system and with the latest cameras, you will have to pay for an upgrade yearly, or every year or two (not sure if they allow you to jump versions for the upgrade price).

So far the annual upgrade price is about the cost of Adobe's annual subscription for Lightroom+Photoshop.

I see Capture One as a good replacement for Apple's Aperture, and a competitor to Lightroom. I don't think that it is a replacement for iViewMedia.

I have just been perusing the NeoFinder documentation. I note they have a number of comments about moving iViewMedia catalogs to NeoFinder.
 
Well, when I publish mine onto GitHub you'd be welcome to contribute. It's server-based, multi-user, fast as hell, and I'm considering putting face-recognition and other stuff like that into it eventually. :)

It looks very interesting :)

In the "About" pop-up of your Damselfy DAM app, it says "Powerd by Blazor". The wikipedia page for Blazor says "Blazor is a free and open-source web framework that enables developers to create web apps using C# and HTML. (...) It is being developed by Microsoft."

I guess the client side (web browser) can run on macOS. Is it a self-contained web client app?

And does the web server side run on macOS?

--
-JF
 
In the "About" pop-up of your Damselfy DAM app, it says "Powerd by Blazor". The wikipedia page for Blazor says "Blazor is a free and open-source web framework that enables developers to create web apps using C# and HTML. (...) It is being developed by Microsoft."

I guess the client side (web browser) can run on MacOS. Is it a self-contained web client app ?

And does the web server side run on MacOS ?
The app is written in C# and .NET Core, which means the server can run on Windows, Mac or Linux (it could also run on Android, I guess). I've dockerised it and run it on my Linux NAS, but do all the development (and hence run it for debugging) on macOS. The client side is all browser-based, so yes, it will run on any OS with a browser (it works pretty well on my phone, but the adaptive UI needs streamlining to make it really usable).


There are two types of Blazor app: Blazor Server and Blazor WASM. The latter takes the .NET runtime, compiles it to WebAssembly, and runs that in the browser. It means you can write complete browser-based applications in C#.

My app runs in Blazor Server, though, which is more akin to normal web-app development. All of the logic runs in the back end, and it acts as a webserver. The advantage of that is that it's simple to write the server-side logic for DB management, image scanning, etc. The great thing is that all your code is in C# (or mixed HTML/C#), which means no JavaScript - a boon for me.

I'm currently using SQLite for the DB and data management, which seems fine - even with 500k images and 20,000 keywords indexed it seems pretty quick. SQLite is super-easy for deployment, but I might extend it in future to allow connection to Postgres or MySQL (a bit like the digiKam model, where you can use its own built-in DB, or point it at your own DB for better performance and concurrency). We'll see.
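(For the curious, the general idea looks something like this; a minimal sketch with SQLite's built-in FTS5, not the app's actual schema:)

```python
# Minimal full-text keyword search with SQLite FTS5 (needs an SQLite build
# with the FTS5 extension, which most modern ones include).
import sqlite3

db = sqlite3.connect("dam.db")
db.execute("PRAGMA journal_mode=WAL")  # helps a bit with concurrent readers
db.execute("""CREATE VIRTUAL TABLE IF NOT EXISTS images
              USING fts5(path UNINDEXED, keywords, caption)""")
db.execute("INSERT INTO images VALUES (?, ?, ?)",
           ("2019/12/heron.jpg", "birds heron lake", "Grey heron at dawn"))
db.commit()

# MATCH queries stay quick even with hundreds of thousands of rows.
for (path,) in db.execute("SELECT path FROM images WHERE images MATCH ?",
                          ("heron",)):
    print(path)
```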

The other thing I've been looking at is using an Electron container to run the app, the idea being that it would give me more options for native interactions with the client-side OS. For example: currently, to work locally on some photos in (say) On1 or Lightroom, you click 'export' and it downloads a zip locally. You work on the images, and then you'd sync them back to the server. If the app were hosted in Electron, you could nominate a local Pictures folder; as soon as you picked images to work on they'd sync locally, and they could be synced back to the server in a single click. I.e., "check out, work locally, check back in", which would be a nice workflow.
 
Thank you for this full explanation. I'm a software developer myself (C on Unix, specialized transactional server apps -- no web or webapp development).

I do agree that developing in C# should be more fun than JavaScript --- this is a personal point of view, so JS fans should not take offence ;)

If the SQLite data manipulation is single-thread or thread-safe, it might not need to be converted to MySQL or Postgres, unless there could be a significant performance gain.

Changing to an Electron container would be worthwhile if it really helps going back and forth between the DAM app and a RAW developer (DXO, ON1, ...) or to an image editor (Affinity, Ps, ...). On the other hand, using Electron would be another step toward the Google monopoly and its subjugation of web browser technology. ... Nonetheless, users will almost always prefer speed over "vendor independence".

There would be a kind of elegance if the app stayed server-based and could use (almost) any web browser (Safari, Firefox, Chrome/Chromium).

--
-JF
 
I do agree that developing in C# should be more fun than JavaScript --- this is a personal point of view, so JS fans should not take offence ;)
Heh. I think we're in agreement about JS. ;)
If the SQLite data manipulation is single-thread or thread-safe, it might not need to be converted to MySQL or Postgres, unless there could be a significant performance gain.
Yes, I suspect it'll be fine - it absolutely zips along for collections of 10K images, and a full-text search query returns results in around 250ms or less with 500k images. I guess my main concern is how well it'll work if you have more than 2 users, but we'll cross that bridge once I have it working for 1 user. ;)
Changing to an Electron container would be worthwhile if it really helps going back and forth between the DAM app and a RAW developer (DXO, ON1, ...) or to an image editor (Affinity, Ps, ...). On the other hand, using Electron would be another step toward the Google monopoly and its subjugation of web browser technology. ... Nonetheless, users will almost always prefer speed over "vendor independence".
There would be a kind of elegance if the app stayed server-based and could use (almost) any web browser (Safari, Firefox, Chrome/Chromium).
Yes, of course. The idea would be that the Electron container would be entirely optional; primarily you'd run the app in the browser, but there'd be appropriate hooks in place so that if you launch the app in Electron, the more native integration with the local file system would be enabled. It would be more of an add-on/enhancement rather than anything mandatory. There is another alternative, which is to use the native Chrome filesystem API, but that's pretty experimental. Perhaps by the time I get around to writing that part, somebody will have wrapped it in a Blazor library. ;)

As for the subjugation of browser technology, I think that happened the moment that Microsoft switched to use WebKit. But that's for another thread. :D
 
If the SQLite data manipulation is single-thread or thread-safe, it might not need to be converted to MySQL or Postgres, unless there could be a significant performance gain.
Yes, I suspect it'll be fine - it absolutely zips along for collections of 10K images, and a full-text search query returns results in around 250ms or less with 500k images. I guess my main concern is how well it'll work if you have more than 2 users, but we'll cross that bridge once I have it working for 1 user. ;)
For enthusiast photographers, the 95-98% use case is probably single user.

I think that multi-user would be more likely for commercial use. And then, licensing should cost more. That would cover the cost of further development.
Changing to an Electron container would be worthwhile if it really helps going back and forth between the DAM app and a RAW developer (DXO, ON1, ...) or to an image editor (Affinity, Ps, ...). On the other hand, using Electron would be another step toward the Google monopoly and its subjugation of web browser technology. ... Nonetheless, users will almost always prefer speed over "vendor independence".

There would be a kind of elegance if the app stayed server-based and could use (almost) any web browser (Safari, Firefox, Chrome/Chromium).
Yes, of course. The idea would be that the Electron container would be entirely optional; primarily you'd run the app in the browser, but there'd be appropriate hooks in place so that if you launch the app in Electron, the more native integration with the local file system would be enabled. It would be more of an add-on/enhancement rather than anything mandatory. There is another alternative, which is to use the native Chrome filesystem API, but that's pretty experimental. Perhaps by the time I get around to writing that part, somebody will have wrapped it in a Blazor library. ;)
If the server part is on the same machine as the client part, then sending a picture to an external editor could be done by the server part. The server part would copy the picture to a specific directory and start an app, pointing it at the copy.
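Something like this, perhaps (a sketch only; the editor name, folders, and the macOS-specific 'open -a' launch are all placeholders):

```python
# Copy a picture into a working folder and hand the copy to an external
# editor, so the original stays untouched.
import shutil
import subprocess
from pathlib import Path

def send_to_editor(original: Path, workdir: Path = Path("/tmp/dam-edit")):
    workdir.mkdir(parents=True, exist_ok=True)
    copy = workdir / original.name
    shutil.copy2(original, copy)  # preserves timestamps too
    # macOS-style launch; on Linux/Windows you'd call the editor binary.
    subprocess.Popen(["open", "-a", "Affinity Photo", str(copy)])
    return copy

send_to_editor(Path("/photos/2019/heron.jpg"))  # hypothetical path
```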
As for the subjugation of browser technology, I think that happened the moment that Microsoft switched to use WebKit. But that's for another thread. :D
Didn't MS jump in much deeper when it started to use Blink, the Chromium engine?
 
For enthusiast photographers, the 95-98% use case is probably single user.
Yep, I agree.
I think that multi-user would be more likely for commercial use. And then, licensing should cost more. That would cover the cost of further development.
Well, I'm not planning to sell this, as it would conflict with my day job - but again, that's a question for the future. Once it's on GitHub, if people want to collaborate and enhance to make it multi-user, that would be awesome.
If the server part is on the same machine as the client part, then sending a picture to an external editor could be done by the server part. The server part would copy the picture to a specific directory and start an app, pointing it at the copy.
I'm not really sure that's a sensible or likely use-case. If the server and photos are running on your laptop or PC, then you're as well to just use Lightroom/On1 etc directly over your collection. But maybe you're right...
 
I think that multi-user would be more likely for commercial use. And then, licensing should cost more. That would cover the cost of further development.
Well, I'm not planning to sell this, as it would conflict with my day job - but again, that's a question for the future. Once it's on GitHub, if people want to collaborate and enhance to make it multi-user, that would be awesome.
Indeed, the day job has priority -- it is quite uncertain whether writing a new app would turn a profit.

As for GitHub, you might want to think carefully about the type of licensing you want to offer.

Some "clever" people might take the source code, run away, and turn it into a commercial product... and pretend they have a right to it. Example: nginx web server (not exactly the same scenario, but, a bad story).
If the server part is on the same machine as the client part, then sending a picture to an external editor could be done by the server part. The server part would copy the picture to a specific directory and start an app, pointing it at the copy.
I'm not really sure that's a sensible or likely use-case. If the server and photos are running on your laptop or PC, then you're as well to just use Lightroom/On1 etc directly over your collection. But maybe you're right...
If your app handles scrolling through 300K pictures quickly, manages & searches keywords and metadata, and can start an external editor (Ps, Affinity, DxO, ON1, etc.), then it does not matter whether the server and the photos are on the laptop or PC. It would already be a useful DAM. :)

Though I see the point of running the server on a separate machine/PC: it can then be used from various client instances in the same house or office.
 
