For the science to be good, it would take the efforts of many hundreds of people, or of one person
over many dozens of years, perhaps. What group could afford the cost of such an endeavor?
Science must have credibility and ... a sufficiently broad base of repeatable testing, performed and recorded
with accurate means that can be independently verified and checked against a standard. Testing also
involves rigs, machinery perhaps, time, and of course lots of rope of many different types and sizes.
The bottom line for such testing, apart from the time it would take, is money.
There is something to be gained w/o so much cost. Agent_Smith on this forum
has already done more with some home-brew testing than was available to us
prior to his good efforts, aided by comments from this community. So, at last,
we have marker threads in knots and their remnants in broken specimens, and thus
an indication of rupture points for some knots in some cordage -- a start. This
technique (marking the rope; photographing the marked specimen during and
after testing) obviates the urge to rent some super-high-speed camera
(the (Mclaren? Milne?) report concluded that even such a camera was too slow
for good identification of break points).
And I hope to use the technique in submitting some specimens for testing.
Certainly the testing would have to be repeated, say, a dozen or more times with the same type of line,
and then again with a different thickness of line. It would then have to be repeated dozens more times with
a different tyer -- or three or four or more different tyers -- each perhaps working from a standardized
testing instruction and a standardized method-of-tying instruction.
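Just to put rough numbers on that scale -- and these counts are only my illustrative guesses, not anyone's agreed protocol -- a little tallying sketch (Python):

```python
# Rough tally of break-test specimens needed for ONE knot under a modest matrix.
# Every count below is an assumption for illustration, not an agreed standard.
rope_types  = 3    # e.g. nylon kernmantle, polyester braid, laid polypropylene
diameters   = 3    # thicknesses tested per rope type
tyers       = 4    # different people tying to the same written instruction
repetitions = 12   # pulls per (rope type, diameter, tyer) combination

specimens_per_knot = rope_types * diameters * tyers * repetitions
print(specimens_per_knot)        # 432 breaks -- and that is for a single knot
print(specimens_per_knot * 20)   # 8,640 breaks for a modest 20-knot survey
```

Even with those modest counts, a single knot demands hundreds of pulls; hence the money problem noted above.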
Currently we see use of data that has not even the beginning of these considerations.
Surely even just explicitly recorded & reported testing will be an advance -- recall,
one usually cannot even know which end of a Fig.8 eyeknot was loaded, or which
rope in a Sheet Bend broke! JUST KNOWING THAT would be progress, albeit small.
As for the full implication of considering all the various factors, I have hopes that (as
I once wrote here) some statistical reasoning can greatly reduce the number of test
cases needed -- especially for things that should be non-factors, such as rope diameter,
or that show only a mild drift across a full test series (say, as thickness rises,
percentage strength slightly diminishes). Such factors can then be assumed and merely
spot-checked in further cases (and if something falls outside the expected range,
THEN do more testing and re-thinking).
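As a rough illustration of that spot-check idea (all numbers below are invented for the example; the trend and tolerance would come from whatever full test series one actually had):

```python
import numpy as np

# Hypothetical full-test series: rope diameter (mm) vs. knot efficiency
# (% of unknotted rope strength).  Invented numbers showing a slight drift.
diam = np.array([7.0, 8.0, 9.0, 10.0, 11.0])
eff  = np.array([62.0, 61.0, 59.5, 58.0, 57.0])

slope, intercept = np.polyfit(diam, eff, 1)          # simple linear trend
resid_sd = np.std(eff - (slope * diam + intercept))  # scatter about the trend

def spot_check(d_mm, measured_eff, k=2.0, slack=2.0):
    """Compare one later measurement against the expected band; 'slack' is an
    assumed extra tolerance (in % points) for ordinary measurement noise."""
    expected = slope * d_mm + intercept
    if abs(measured_eff - expected) > k * resid_sd + slack:
        return "out of expected range -- do more testing and re-thinking"
    return "within expected range -- assumption holds"

print(spot_check(12.0, 56.0))   # plausible continuation of the drift
print(spot_check(12.0, 48.0))   # big drop => trigger for fuller testing
```

The point is only the shape of the procedure: trust the established trend, and let an out-of-band result be the trigger for more testing.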
Heck, it might be that one benefit of some limited set of rigorous testing amounts
to a clear advisory: ONLY USE TEST RESULTS OF EXACT MATERIALS ... . That
would be progress over relying on some supposed "knot strength" presumed to come
with the structure irrespective of material.
Certainly the IGKT could offer such testing were there any standard accepted by a worldwide community.
I think that the prior step is this: to collect test data/reports as they become available,
and to examine them for relevance & shortcomings -- to test their adequacy against
a battery of questions. I think we'll find them all coming up well short of thoroughness
(and here I mean especially in the detail of the reports -- it will be easy to demonstrate
that one cannot perform repeated, verifying testing, because of so many unknowns).
From this beginning, in an iterative build-&-critique process, a test-standard can be
developed. The IGKT can apply this to new testing that comes to our attention,
and maybe proactively encourage some places to adopt it for future testing.
If that happens (adoption), then future test reports will be more useful.
... there are differing opinions in the matter of which knot is best for which application
- how do we set a standard with so many dissenting opinions?
Well, noting the differences is only the first pass; then the differing POVs are pressed
for a rationale, for backing, and that gets examined -- as you have done in asking for
evidence/rationale against the Blackwall Hitch, which you have used w/o problem,
your usage being evidence of its (at least sometimes) effectiveness.
Unfortunately, I lost a list of misc. test reports in a dead computer,
but that can be re-collected (and maybe even recovered from the computer somehow).
--------------
Another cheap-testing example.
LCurious et al. have disparaged the
Offset Ring Bend (aka "EDK (Euro.DeathKnot)", "Thumb
Bend", "#1410", "Dbl.Overhand Knot", ...). (I like "ORB" because it sets "offset" against
an understood (sometimes) knot name, "Ring Bend", and works to establish "offset"
as a handy prefix/modifier -- "Offset Grapevine Bend", "Offset Fig.8 Bend", etc.)
What do they cite? Rumors of failure (I know of only one dubious accident report
for the ORB, and of a fatal one for the OF8g), and testing that shows it can "roll" (flype).
Yes, well, it has inverted (into itself, but nearer its ends) -- yet at loads WAY above
what abseilers should generate: say, 800#, when this knot is used in but ONE of
the two strands supporting a load of maybe 350#, i.e., roughly 175# on the knotted strand.
So, relevant testing for this knot is something that each climber should be able
to perform using a few 'biners for pulley sheaves (if lacking something better)
and body weight: they will thus be (1) applying much greater force in such
a test than is expected in practice, and (2) doing so in materials of their own choosing
(which should be similar to whatever they will find when climbing with associates) -- such as
a 7mm haul line tied to 10.5mm dynamic rope. What can aid them in this
is a well-presented guide on tying the knot, on orienting it so that the thinner
rope is <here>, and so on. They can even test the wrong orientation (which,
in some cases of back-up (stoppered tail), should still hold, albeit with less
impediment to flyping).
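For a feel of the numbers involved, here is a minimal sketch; every figure (body weight, mechanical advantage, 'biner friction) is my assumption for illustration, not a measured result:

```python
# All figures in pounds-force; every value is an illustrative assumption.
abseiler_load   = 350.0                 # total load on the doubled rope (as above)
per_strand_load = abseiler_load / 2     # the knotted strand carries roughly half

body_weight      = 180.0   # the tester hauling / hanging on the home rig
theoretical_ma   = 3.0     # e.g. a simple 3:1 rigged through carabiners
biner_efficiency = 0.7     # a 'biner is a poor sheave; heavy friction loss
loaded_redirects = 2       # redirects in the 3:1 that eat that friction

test_force = body_weight * theoretical_ma * biner_efficiency ** loaded_redirects
print(round(per_strand_load))   # ~175# seen by the knot in practice
print(round(test_force))        # ~265# from the crude rig -- already well above
```

On these assumed figures, even a crude, friction-ridden 3:1 loaded with body weight comfortably exceeds the load the knot should ever see in an ordinary abseil.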
But to the point: the above is very cheap/easy home testing that is also
immediately relevant -- more so than costly break testing on a device.
But, alas, folks are more likely to heed fearmongering on the WWWeb,
which comes in 1- or 2-sentence dribbles and is echoed likewise,
and where a few paragraphs of explanatory guidance are mocked
(too much for the cell-phone display, HELP!).
--dl*
====