If you are looking for optimal color balance, it is G-type stars that should be white. G-type stars are technically "yellow-white", but in practice they are much closer to white than to yellow.
There is a way to get "proper" color balance referenced to our Sun. It is called G2V calibration, as G2V is the specific spectral type of our Sun. G2V calibration, which can be done with eXcalibrator (free) alongside Photoshop, PixInsight, and a number of other programs, uses star catalogs and plate solving to identify G2V-type stars within your image and determine the white point from them. (There usually are some in wider-field images; it can be more difficult with narrow fields like galaxies. Galaxies are better balanced by determining an aggregate white point from the collective stars of the galaxies in the image anyway.) You can then simply plug the provided red, green, and blue white point values into whatever program you are using to calibrate.
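To make the idea concrete, here is a minimal sketch of how G2V-derived white-point factors get applied. The flux numbers are hypothetical stand-ins for what a tool like eXcalibrator would report after plate solving; the function name is mine, not part of any program's API.

```python
# Hedged sketch: turning measured G2V star fluxes into per-channel
# white-point multipliers, normalized so green is left unchanged.
# The flux values below are made up for illustration.

def g2v_white_point(r_flux, g_flux, b_flux):
    """Return (R, G, B) multipliers that make a G2V star neutral."""
    return (g_flux / r_flux, 1.0, g_flux / b_flux)

# Hypothetical mean fluxes of the G2V stars found in a frame:
factors = g2v_white_point(r_flux=1.18, g_flux=1.00, b_flux=0.84)
print(factors)  # red is scaled down, blue is scaled up
```

Multiplying each channel of the image by its factor is the calibration step; this is why a wrong catalog match shows up as a decided red, green, or blue cast.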
There is documentation on how to use eXcalibrator with Photoshop, if that is what you are interested in doing.
One caveat: G2V calibration is not infallible. It can fail, especially if not enough stars of the right type are found, or if you end up having to use the NOMAD catalog (which happens often), which can require additional calibration of the catalog stars themselves. You will know if the results are off: the image will look decidedly too red, blue, or green.
Thanks for the in-depth response! Is G2V calibration better than direct calibration, where I just enter the Kelvin temperature manually into the camera?
I'm also confused about these star catalogs. Why don't we calibrate off of F-type stars, if they are considered "pure white", rather than G-type stars, which are yellowish-white?
I believe the surface temperature of F-type stars is around 6500 K. If I entered that as the WB temperature, wouldn't Sun-like stars appear yellow-white?
If it is done right, I do believe so. There are limitations, though. G2V calibration, at least if done with eXcalibrator, will not account for atmospheric extinction, nor can it deal with color shifts caused by changing transparency. Not automatically, anyway. There ARE ways to handle those issues with eXcalibrator, but they are a little involved and advanced. In my experience, barring color casts from LP, anything imaged at least 30 degrees above the horizon, up to the meridian, will usually calibrate well with eXcalibrator. As such, it works best with data from a good dark site, and not as well with data from the city. It also requires LRGB data, or DSLR data from which a synthetic luminance has been extracted and plate solved.
As for color temperatures: a color temperature around 5500 K, give or take, is going to be pure white. The Sun, assuming we could actually see it with direct vision, is affected by atmospheric (Rayleigh) scattering, which changes the observed color. Its elevation above the horizon also affects its color, as the amount of scattering depends on the depth of the column of atmosphere between it and the observer. Measured from the Earth's surface, our Sun, when directly overhead at the zenith, would be a "yellow-white". Measured from outside our atmosphere, it should be a fairly pure white, because none of its light is being scattered. Near the horizon, more blue light is scattered due to the increased depth of the atmosphere, so the Sun appears orange or even orangish-red.
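One quick way to see why sources in the ~5500-6500 K range read as white to blue-white is to check where their blackbody spectrum peaks, using Wien's displacement law:

```python
# Wien's displacement law: peak emission wavelength of a blackbody.
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_nm(temp_k):
    return WIEN_B / temp_k * 1e9  # convert meters to nanometers

print(round(peak_wavelength_nm(5778)))  # Sun (G2V, ~5778 K) -> ~502 nm, mid-spectrum
print(round(peak_wavelength_nm(6500)))  # F-star range       -> ~446 nm, shifted blueward
```

Both peaks sit inside the visible band with substantial energy across all of it, which is why, absent atmospheric scattering, neither looks strongly colored.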
In general, I think the discrepancy between an overhead-Sun "yellow-white" and a true "pure white" is small enough that it doesn't meaningfully affect color calibration. It certainly affects it much less than light pollution, whether from the moon or from city lighting. Depending on solar activity, even airglow and possibly aurora can affect color balance. I've even seen fairly significant changes in the color of my subs' background skies simply due to the passage of very thin clouds that I cannot even see, yet which pick up and reflect the city light from Denver fifty miles away. All of those sources of LP tend to introduce gradients and skew accurate color. These gradients are often inconsistent in color across the field, and there can be multiple layers of different gradients with different orientations. I find that these gradients have a far bigger impact on my ability to color balance properly. As such, I spend a lot of time in PixInsight carefully identifying and extracting all of the gradients in my images before I do any color calibration. Whatever discrepancies result from less-than-perfect G2V calibration, or from simply using the self-referenced calibration that PixInsight does itself (which tends to differ from G2V calibration by a fairly small amount in most cases), are not enough for me to worry about for the most part.
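For anyone curious what "extracting a gradient" means in the simplest case, here is a toy sketch: fit a plane to the frame and subtract it. Real tools (PixInsight's DynamicBackgroundExtraction, for instance) fit far more flexible surfaces from user-placed background samples, but the underlying idea is the same. Everything here is synthetic data for illustration.

```python
import numpy as np

def remove_planar_gradient(img):
    """Fit a least-squares plane to the frame and subtract it,
    preserving the mean background level."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    plane = (A @ coeffs).reshape(h, w)
    return img - plane + plane.mean()

# Synthetic frame: flat sky plus a left-to-right light-pollution ramp
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
frame = 100.0 + 0.5 * xx
flattened = remove_planar_gradient(frame)
print(flattened.std() < 1e-6)  # the ramp is gone, the frame is flat
```

The reason to do this before color calibration is that a gradient shifts the measured color of every star and background pixel it crosses, so any white point computed from ungraded data is biased.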
In the end, I'm creating art. I'm not a scientist. As much as this is a 'scientific hobby', it is still more art than science. The vast majority of astrophotographers are hobbyists and artists; very few are actually scientists, and even those who are usually aren't doing science with their astrophotography. Discrepancies in color calibration usually do not matter. They aren't significant enough, and few people notice, as most aren't attuned enough to what truly is "scientifically accurate" to even know what to look for. Color calibration isn't a fixed "this is correct, that ain't" kind of thing in the end. Not for the artistic 'scientists' we are.
None of my images are 100% perfectly scientifically calibrated to ideal G2V, accounting for extinction and atmospheric conditions. To do that for every image would be extremely complicated, significantly more effort...and it just doesn't matter. All of these images are reasonably accurately color calibrated. I tend to opt for color diversity rather than die-hard accuracy in star colors. (I usually have a good deal more whiter yellow-white stars, yellow stars, and orange stars than bluer stars...scientifically speaking, I should probably have even more yellow stars, and many of the bluer stars should probably be more neutral white...but I don't think that looks as good.)
Don't worry about being so exact. These images were all taken under widely varying conditions. Some had simple gradients, some had very complex gradients. Some had a lot more LP than others (some were imaged under 20.95 mag/sq" skies, others under 21.5 mag/sq" skies, which is about three-quarters of a stop of difference in sky brightness). Some had decent seeing, some had terrible seeing. Some are only ~3-4 hours of integration; one is over eight hours.
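The stop figure above follows directly from the magnitude scale, where every magnitude is a flux ratio of 10^0.4 (about 2.512x). A quick check:

```python
import math

def mag_diff_to_stops(m1, m2):
    """Convert a sky-brightness difference in mag/sq" to camera stops."""
    flux_ratio = 10 ** (0.4 * abs(m2 - m1))  # the brighter sky delivers this much more flux
    return math.log2(flux_ratio)             # stops are factors of two

print(round(mag_diff_to_stops(20.95, 21.5), 2))  # -> 0.73 stops
```

So a 0.55 mag/sq" difference in sky brightness is roughly three-quarters of a stop of background signal, which is very noticeable in the noise floor of the subs.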
In the end, all those various factors don't matter as much as how you deal with them when you process. I have spent much of my time over the last two years learning how to deal with the data I am able to gather, since more often than not the skies are cloudy, have a moon in them, or are riddled with bad seeing and excessively bloated stars, and imaging is simply impossible. On the rare occasions when the skies are clear, dark, and stable, you gather what data you can. You'll often make mistakes: forget to set white balance, or set it wrong. Or the skies just aren't quite as good as you thought (high, thin clouds injecting a massive orange-brown gradient). Whatever the issue is...the vast majority of them can be taken care of during processing.
Acquire more. Stack more. Process more.

In the end, those are the fundamental means by which you will create the best integrations, and get the best results.